All Topics


From the query below, I ran it for 02:00 to 03:00 and posted the output, then ran the same query again for 03:00 to 04:00 and posted that output. I want a single query that compares the previous hour's data (02:00 to 03:00) with the current hour's data (03:00 to 04:00) and calculates the percentage difference.

| mstats sum(transaction) as Trans where index=host-metrics service=login application IN(app1, app2, app3, app4) span=1h by application

Output for 02:00 to 03:00:
_time application Trans
2022-01-22 02:00 app1 3456.000000
2022-01-22 02:00 app2 5632.000000
2022-01-22 02:00 app3 5643.000000
2022-01-22 02:00 app4 16543.00000

Output for 03:00 to 04:00:
_time application Trans
2022-01-22 03:00 app1 8753.000000
2022-01-22 03:00 app2 342.000000
2022-01-22 03:00 app3 87653.000000
2022-01-22 03:00 app4 8912.00000
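A minimal sketch of one way to do this in a single search, assuming the same metric index and field names as above and a time range that covers both hours (e.g. 02:00 to 04:00); streamstats carries each application's previous hourly total forward so the two hours can be compared in one result set:

| mstats sum(transaction) as Trans where index=host-metrics service=login application IN(app1, app2, app3, app4) span=1h by application
| sort 0 application _time
| streamstats current=f window=1 last(Trans) as prev_Trans by application
| eval pct_change = round((Trans - prev_Trans) / prev_Trans * 100, 2)
| where isnotnull(prev_Trans)
| table _time application prev_Trans Trans pct_change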
Below are the sample logs. I am not sure how to write the props line breaker; can anyone help with this?

A0C0A0H8~~AB~ABCg.C~AB~Wed Jan 11 19:11:17 IST 2021~C~0.00~0.00~0.01~Z~1HTYYY
B0C0A0K8~~AB~ABCUHg.C~AB~Mon Jan 10 20:11:17 IST 2021~C~0.00~0.00~0.01~Z~1HTYYY1245
D0C01010~~CD~SDRg.D~HH~Thu Jan 20 11:11:17 IST 2021~C~0.00~0.00~0.01~Z~1140AU
A0C01212~~AB~ABCg.C~AB~Wed Jan 11 19:11:17 IST 2021~C~0.00~0.00~0.01~Z~1HTYYY
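A minimal props.conf sketch, assuming each record sits on its own line and starts with an 8-character alphanumeric ID followed by two tildes; the sourcetype name is a placeholder, and the timestamp is taken from the sixth tilde-delimited field:

[tilde_delimited_logs]
SHOULD_LINEMERGE = false
LINE_BREAKER = ([\r\n]+)(?=[A-Z0-9]{8}~~)
TIME_PREFIX = ^(?:[^~]*~){5}
TIME_FORMAT = %a %b %d %H:%M:%S %Z %Y
MAX_TIMESTAMP_LOOKAHEAD = 32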
We have a small satellite deployment of 40+ servers with a dedicated HF doubling as a Deployment Server running on Linux, and an equal mix of Windows and Linux hosts. 24 hours ago we discovered that a few of the Windows servers were reporting that they no longer had the Windows_TA installed and were instead running the Linux_TA. Checking the UF hosts directly, they were in fact running the Windows_TA, even though the DS reported they were running the Linux_TA. After a day of trying to figure out how (validated filters, tested, removed and re-added all server classes and apps), it continued. Throughout the day a few more started reporting this mix-up, and again those reporting the Linux_TA were actually running the Windows_TA. As a final drastic measure, we removed Splunk from the host (the HF/DS, not the UFs), reinstalled from scratch, and rebuilt the environment, making sure the UFs were not running any of the distributed apps/TAs. We built new apps and server classes. The UFs started phoning home, and once again the Windows servers were reported as having the Linux_TA while actually running the Windows_TA.
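Not an explanation of the root cause, but a hedged serverclass.conf sketch of one way to keep the two TAs from ever being assigned to the wrong platform: machineTypesFilter restricts each server class by the client's reported OS/architecture regardless of any hostname-based whitelist. Class names below are placeholders, and the filter values assume 64-bit hosts:

[serverClass:windows_uf]
whitelist.0 = *
machineTypesFilter = windows-x64

[serverClass:windows_uf:app:Windows_TA]
restartSplunkd = true

[serverClass:linux_uf]
whitelist.0 = *
machineTypesFilter = linux-x86_64

[serverClass:linux_uf:app:Linux_TA]
restartSplunkd = true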
I have an on-prem Splunk Enterprise installation consisting exclusively of Universal Forwarders and a single indexer. We now have a cloud-hosted environment that is restricted, as it is hosted by an external company; they do not allow us to install any software but their own on the servers. Is there any way to get data into my indexer without a forwarder? And without a forwarder, am I able to apply allow/deny lists to events?
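One hedged option, assuming the hosted servers can at least send syslog or write to a network destination: open a network input directly on the indexer, and do any allow/deny filtering at index time on the indexer with props/transforms. Port, sourcetype, index name, and the drop pattern below are all placeholders:

# inputs.conf on the indexer
[tcp://9514]
sourcetype = cloud_syslog
index = cloud_env

# props.conf on the indexer
[cloud_syslog]
TRANSFORMS-drop_noise = drop_noise

# transforms.conf on the indexer
[drop_noise]
REGEX = healthcheck
DEST_KEY = queue
FORMAT = nullQueue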
I want to mask some data coming from my web server logs, but only for one server out of all of them. Can I apply my masking rule to only that one web server's source instead of all the web servers sending to the same sourcetype? And if I apply the rule to all web server logs, will it cause high resource usage on my indexer? Thanks
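Masking can be scoped with a host:: or source:: stanza in props.conf so it only runs against that one server's events. A minimal sketch, assuming a hypothetical host name and a hypothetical pattern to mask, placed on the indexer or heavy forwarder that parses the data:

# props.conf
[host::webserver01]
SEDCMD-mask_account = s/account=\d+/account=########/g

Scoping the stanza this way also keeps the regex from being evaluated against every other web server's events.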
Hi all,

I'm analysing event counts for a specific search criterion and I want to know how the count of values changes over time. The search below is not good enough to see what is going on, because some usernames have a huge number of events while others with small numbers are barely noticeable (I'm interested in the rate of change, not the count itself).

``` index=test_index "search string" | timechart span=10m count(field1) by username ```

So I want to see the rate of change of the count rather than the simple count, by the username field. How can we achieve this?
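A minimal sketch of one approach, assuming the same index and search string as above: bin the events, count per username, then let streamstats compare each 10-minute bucket with the previous one so the series shows percentage change instead of raw volume:

index=test_index "search string"
| bin _time span=10m
| stats count by _time, username
| streamstats current=f window=1 last(count) as prev_count by username
| eval pct_change = round((count - prev_count) / prev_count * 100, 2)
| xyseries _time username pct_change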
I have 2 servers (hosts) and I need to create an alert that triggers when the difference in value (or load) between the 2 hosts is greater than 50 percent.
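A hedged sketch of the search such an alert could run, assuming a hypothetical index, a numeric load field, and two known host names; it compares the two hosts' averages over the alert's time range and only returns a row (and therefore fires) when the gap exceeds 50 percent of the smaller value:

index=os_metrics host IN (host1, host2)
| stats avg(load) as avg_load by host
| stats max(avg_load) as higher, min(avg_load) as lower
| eval diff_pct = round((higher - lower) / lower * 100, 2)
| where diff_pct > 50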
I have three tables. Each has one or more ID fields (out of ID_A, ID_B, ID_C) and assigns values Xn, Yn, Zn to these IDs. In effect, the tables each contain a fragment of information from a set of objects 1...5.

Table X (ID_A; ID_B; X1; X2):
A1; B1; X1_1; X2_1
A2; B2; X1_2a; X2_2
A2; B2; X1_2b; X2_2
A3; B3; X1_3; X2_3

Table Y (ID_A; ID_B; Y1; Y2):
A2; B2; Y1_2; 
A2; B2; ; Y2_2
A3; B3; ; Y2_3a
A3; B3; ; Y2_3b
A4; B4; Y1_4; Y2_4

Table Z (ID_B; ID_C; Z1):
B1; C1; Z1_1
B3; C3; Z1_3
B5; C5; Z1_5

How can I create the superset of all three tables, i.e. reconstruct the "full picture" about objects 1...5 as well as possible? I tried union and join in various ways, but I keep tripping over the following obstacles:
The 1:n relation between ID and values (which should remain expanded as individual rows)
Empty fields in between (bad for stats list(...) or stats values(...) because of different-sized MV results)
There is no single table that has references to all objects (e.g. object 5 is only present in table Z).

Desired result (ID_A; ID_B; ID_C; X1; X2; Y1; Y2; Z1):
A1; B1; C1; X1_1; X2_1; ; ; Z1_1
A2; B2; ; X1_2a; X2_2; Y1_2; Y2_2; 
A2; B2; ; X1_2b; X2_2; Y1_2; Y2_2; 
A3; B3; ; X1_3; X2_3; ; Y2_3a; Z1_3
A3; B3; ; X1_3; X2_3; ; Y2_3b; Z1_3
A4; B4; ; ; ; Y1_4; Y2_4; 
; B5; C5; ; ; ; ; Z1_5

Sample data:

| makeresults
| eval _raw="ID_A;ID_B;X1;X2
A1;B1;X1_1;X2_1
A2;B2;X1_2A;X2_2
A2;B2;X1_2B;X2_2
A3;B3;X1_3;X2_3
"
| multikv forceheader=1
| table ID_A, ID_B, X1, X2
| append [ | makeresults
  | eval _raw="ID_A;ID_B;Y1;Y2
A2;B2;Y1_2;
A2;B2;;Y2_2
A3;B3;Y1_3;Y2_3A
A3;B3;Y1_3;Y2_3B
A4;B4;Y1_4;Y2_4
"
  | multikv forceheader=1
  | table ID_A, ID_B, Y1, Y2 ]
| append [ | makeresults
  | eval _raw="ID_B;ID_C;Z1
B1;C1;Z1_1
B3;C3;Z1_3
B5;C5;Z1_5
"
  | multikv forceheader=1
  | table ID_B, ID_C, Z1 ]
| table ID_A, ID_B, ID_C, X1, X2, Y1, Y2, Z1
Hi, I want to go through Splunk Fundamentals 1. Where can I get the link for this?
Hello All,

I have created a dashboard and it always shows "No results found". But when I click "Open in Search" or run the search query directly, it shows results. Can anyone help, please?

<form version="1.1" theme="light">
  <label>Successful connections by an IP range dashboard</label>
  <search id="base_srch">
    <query>index=prod sourcetype=auth_logs</query>
    <earliest>$time.earliest$</earliest>
    <latest>$time.latest$</latest>
  </search>
  <fieldset submitButton="false">
    <input type="time" token="time" searchWhenChanged="true">
      <label>test</label>
      <default>
        <earliest>-4h@m</earliest>
        <latest>now</latest>
      </default>
    </input>
  </fieldset>
  <row>
    <panel>
      <chart>
        <search base="base_srch">
          <query>|stats count by ip</query>
        </search>
        <option name="charting.chart">pie</option>
        <option name="charting.drilldown">none</option>
      </chart>
    </panel>
  </row>
</form>
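A common cause (offered as a guess, not a confirmed diagnosis): a non-transforming base search only passes a limited set of fields to its post-process searches, so the post-process |stats count by ip may never see the ip field. A minimal sketch of the base search with the needed field kept explicitly:

<search id="base_srch">
  <query>index=prod sourcetype=auth_logs | fields ip</query>
  <earliest>$time.earliest$</earliest>
  <latest>$time.latest$</latest>
</search>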
Statement: You install 1Password Events Reporting for Splunk from https://splunkbase.splunk.com/app/5632

Problem: After configuring it correctly, you get error messages in the _internal index like:

03-26-2024 11:37:30.974 +0000 ERROR ExecProcessor [12044 ExecProcessor] - message from "/opt/splunk/etc/apps/onepassword_events_api/bin/audit_events" 2024/03/26 11:37:30 [DEBUG] POST https://events.1password.com/api/v1/auditevents
03-26-2024 11:37:27.672 +0000 ERROR ExecProcessor [12044 ExecProcessor] - message from "/opt/splunk/etc/apps/onepassword_events_api/bin/signin_attempts" 2024/03/26 11:37:27 [DEBUG] POST https://events.1password.com/api/v1/signinattempts
03-26-2024 11:37:23.259 +0000 ERROR ExecProcessor [12044 ExecProcessor] - message from "/opt/splunk/etc/apps/onepassword_events_api/bin/item_usages" 2024/03/26 11:37:23 [DEBUG] POST https://events.1password.com/api/v1/itemusages
03-26-2024 11:37:20.561 +0000 ERROR ExecProcessor [12044 ExecProcessor] - message from "/opt/splunk/etc/apps/onepassword_events_api/bin/audit_events" 2024/03/26 11:37:20 [DEBUG] POST https://events.1password.com/api/v1/auditevents
03-26-2024 11:37:17.440 +0000 ERROR ExecProcessor [12044 ExecProcessor] - message from "/opt/splunk/etc/apps/onepassword_events_api/bin/signin_attempts" 2024/03/26 11:37:17 [DEBUG] POST https://events.1password.com/api/v1/signinattempts

How do you resolve this? The app was configured with a token, the macros had indexes defined, and the interval for the scripted input was set to a cron schedule. Splunk 9.0.3 core, standalone dev environment.
AIM: Integrate AppDynamics with a Kubernetes cluster using the provided documentation.

Issue: I've set up a Kubernetes cluster and aimed to integrate it with AppDynamics for monitoring. Following the provided documentation, I successfully created the cluster agent. However, I encountered errors in the logs and found that the cluster data isn't showing up in the AppDynamics interface.

Reference: Install the Cluster Agent with the Kubernetes CLI

Logs and Findings:

PS C:\Users\SajoSam> kubectl logs k8s-cluster-agent-5f8977b869-bpf5v
CA_PROPERTIES= -appdynamics.agent.accountName=myaccount -appdynamics.controller.hostName=mycontroller.saas.appdynamics.com -appdynamics.controller.port=8080 -appdynamics.controller.ssl.enabled=false -appdynamics.agent.monitoredNamespaces=default -appdynamics.agent.event.upload.interval=10 -appdynamics.docker.container.registration.interval=120 -appdynamics.agent.httpClient.timeout.interval=30
APPDYNAMICS_AGENT_CLUSTER_NAME=onepane-cluster
[ERROR]: 2024-03-26 09:55:04 - secretconfig.go:68 - Problem With Getting /opt/appdynamics/cluster-agent/secret-volume/api-user Secret: open /opt/appdynamics/cluster-agent/secret-volume/api-user: no such file or directory
[INFO]: 2024-03-26 09:55:04 - main.go:57 - check env variables and enable profiling if needed
[INFO]: 2024-03-26 09:55:04 - agentprofiler.go:22 - Cluster Agent Profiling not enabled!
[INFO]: 2024-03-26 09:55:04 - main.go:60 - Starting APPDYNAMICS CLUSTER AGENT version 24.2.0-317
[INFO]: 2024-03-26 09:55:04 - main.go:61 - Go lang version: go1.22.0
W0326 09:55:04.910967 7 client_config.go:618] Neither --kubeconfig nor --master was specified. Using the inClusterConfig. This might not work.
[INFO]: 2024-03-26 09:55:04 - main.go:78 - Kubernetes version: v1.29.0
[INFO]: 2024-03-26 09:55:04 - main.go:233 - Registering cluster agent with controller host : mycontroller.saas.appdynamics.com controller port : 8080 account name : xxxxx
[INFO]: 2024-03-26 09:55:04 - agentregistrationmodule.go:356 - Established connection to Kubernetes API
[INFO]: 2024-03-26 09:55:04 - agentregistrationmodule.go:68 - Cluster name: onepane-cluster
[INFO]: 2024-03-26 09:55:04 - agentregistrationmodule.go:119 - Initial Agent registration
[ERROR]: 2024-03-26 09:55:34 - agentregistrationmodule.go:131 - Failed to send agent registration request: Post "http://mycontroller.saas.appdynamics.com:8080/sim/v2/agent/clusterRegistration": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
[ERROR]: 2024-03-26 09:55:34 - agentregistrationmodule.go:132 - clusterId: -1
[ERROR]: 2024-03-26 09:55:34 - agentregistrationmodule.go:134 - Registration properties: {}
[INFO]: 2024-03-26 09:56:04 - agentregistrationmodule.go:119 - Initial Agent registration
[ERROR]: 2024-03-26 09:56:34 - agentregistrationmodule.go:131 - Failed to send agent registration request: Post "http://mycontroller.saas.appdynamics.com:8080/sim/v2/agent/clusterRegistration": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
[ERROR]: 2024-03-26 09:56:34 - agentregistrationmodule.go:132 - clusterId: -1
[ERROR]: 2024-03-26 09:56:34 - agentregistrationmodule.go:134 - Registration properties: {}
[INFO]: 2024-03-26 09:57:04 - agentregistrationmodule.go:119 - Initial Agent registration
[ERROR]: 2024-03-26 09:57:34 - agentregistrationmodule.go:131 - Failed to send agent registration request: Post "http://mycontroller.saas.appdynamics.com:8080/sim/v2/agent/clusterRegistration": dial tcp 35.84.229.250:8080: i/o timeout (Client.Timeout exceeded while awaiting headers)
[ERROR]: 2024-03-26 09:57:34 - agentregistrationmodule.go:132 - clusterId: -1
[ERROR]: 2024-03-26 09:57:34 - agentregistrationmodule.go:134 - Registration properties: {}
[INFO]: 2024-03-26 09:58:04 - agentregistrationmodule.go:119 - Initial Agent registration

Questions:
1. What could be the root cause of the failure to access the secret file /opt/appdynamics/cluster-agent/secret-volume/api-user?
2. What could be causing the timeout error during the registration request to the AppDynamics controller?

Could you help me with this? Thank you

^ Post edited by @Ryan.Paredez to redact account name and controller name. For privacy and security reasons, please do not share your Account name or Controller URL.
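Not a confirmed fix, but two things the log points at. The missing api-user key suggests the cluster agent's secret was created without it; a hedged sketch of recreating the secret is below, where the namespace, key names, and the api-user value format are assumptions based on the documented CLI setup and must be checked against the install guide. The registration timeout separately points at the controller settings: a SaaS controller is normally reached over HTTPS on port 443 with SSL enabled, not plain HTTP on 8080.

# Hedged sketch: recreate the cluster agent secret with both keys (values are placeholders)
kubectl -n appdynamics create secret generic cluster-agent-secret \
  --from-literal=controller-key=<controller-access-key> \
  --from-literal=api-user="<api-client-name>@<account-name>:<api-client-secret>"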
Hi Splunk team,

We have been using a Splunk query similar to the one below across 15+ Splunk alerts, but the count mentioned in the email shows 4 times the actual number of failure occurrences.

index="<your_index>" sourcetype="<your_sourcetype>" source="<your_source.log>" Business_App_ID=<your_appid> Object=* (Failure_Message=*0x01130006* OR Failure_Message=*0x01130009*)
| stats count by Object, Failure_Message
| sort count

The Splunk query below returns the correct failure events.

index="<your_index>" sourcetype="<your_sourcetype>" source="<your_source.log>" Business_App_ID=<your_appid> Object=* (Failure_Message=*0x01130006* OR Failure_Message=*0x01130009*)

Can you please help update the first Splunk query so that it shows the correct count instead of the wrong one?
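A hedged diagnostic sketch, assuming the inflation might come from Object or Failure_Message being extracted as multivalued fields (a single event carrying, say, 2 values of each would be counted in 4 groups by the stats split); this shows how many values each event actually carries before any grouping:

index="<your_index>" sourcetype="<your_sourcetype>" source="<your_source.log>" Business_App_ID=<your_appid> Object=* (Failure_Message=*0x01130006* OR Failure_Message=*0x01130009*)
| eval n_object = mvcount(Object), n_failure_message = mvcount(Failure_Message)
| stats count as events, avg(n_object) as avg_object_values, avg(n_failure_message) as avg_failure_message_values

If the averages come out above 1, the duplicate field extraction is the thing to fix before the stats split, rather than the stats command itself.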
Good morning, I have started to ingest Palo Alto FW events and they are coming in with a wrong timestamp: the timestamp is 2 hours less than real time. Let me show an example. This is an event in my SCP. My SCP is on Spain time (UTC+1), 11:06 right now. The events are arriving with a timestamp of 9:06, although they are being ingested at 11:06. The PA server is in Mexico and the timestamp in the raw event is 4:06, 5 hours less. The heavy forwarder is also in Mexico, but its clock is on EDT time. If I have explained myself properly, how can I fix it?
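A minimal props.conf sketch of the usual fix, assuming the firewall writes local Mexico time without a timezone offset in the raw event; the sourcetype name and zone below are assumptions to adjust, and the stanza should go on the first full Splunk instance that parses the data (the heavy forwarder here):

[pan:traffic]
TZ = America/Mexico_City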
I have 3 different sources for the same field. I want to aggregate all 3 sources and get the distinct count of the field, e.g.:

sourcetype=source1 | stats dc(userlist)
sourcetype=source2 | stats dc(line.userlist)
sourcetype=source3 | stats dc(line.subject)

Here userlist, line.userlist and line.subject are all the same attribute, just logged differently. Now I want to get the dc of userlist + line.userlist + line.subject. Any help is appreciated.
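A minimal sketch of one way to do this in a single search, assuming the three sourcetypes live in the same index (adjust the index as needed); coalesce picks whichever of the three field names is present on each event, and the single quotes are needed because of the dots in the field names:

index=<your_index> (sourcetype=source1 OR sourcetype=source2 OR sourcetype=source3)
| eval user = coalesce(userlist, 'line.userlist', 'line.subject')
| stats dc(user) as distinct_users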
Hi, I have a couple of questions related to Splunk AI Assistant:
1. Do we need to install Splunk AI Assistant on each Splunk server that we are using?
2. Can Splunk AI Assistant be called using API calls?
3. Does Splunk AI Assistant provide the SPL query or the result of the SPL query?
4. Based on the user's query, if there are multiple matches, will Splunk AI Assistant return all available SPL queries or the best match?
Good morning fellow Splunkthiasts! I have an index with 100k+ events per minute (all of them having the same sourcetype), and approximately 100 fields are known in this dataset. Some of these events are duplicated, while others are unique. My aim is to understand the duplication and be able to explain exactly which events get duplicated. I am detecting duplicates using this SPL:

index="myindex" sourcetype="mysourcetype" | eventstats count AS duplicates BY _time, _raw

Now I need to identify which fields, or which combination of them, make the difference, i.e. under what circumstances an event is ingested twice. I tried the predict command, but it only produces new values for the "duplicates" field and does not disclose the rule by which it makes the decision. In other words, I am not interested in the prediction itself, I want to know the predictors. Is something like that possible in SPL?
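A hedged sketch of one way to look for the predictors with plain SPL: label each event as duplicated or unique, then compare how candidate fields distribute across the two classes (host and source below are only placeholders; swap in any of the ~100 fields under suspicion):

index="myindex" sourcetype="mysourcetype"
| eventstats count AS duplicates BY _time, _raw
| eval dup_class = if(duplicates > 1, "duplicated", "unique")
| stats count by dup_class, host, source
| eventstats sum(count) as class_total by dup_class
| eval pct_of_class = round(count / class_total * 100, 2)
| sort dup_class - pct_of_class

Fields whose value distribution differs sharply between the two classes are candidate predictors; the associate and contingency commands can also help automate that pairwise comparison.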
Hello everyone,  I'm coming to you for advice. I am currently working with splunk to create monitor WSO2-APIM instances.  According to the WSO2-APIM documentation, logs are generated as follows :  [2019-12-12 17:30:08,091] DEBUG - wire HTTPS-Listener I/O dispatcher-5 >> "GET /helloWorld/1.0.0 HTTP/1.1[\r][\n]" [2019-12-12 17:30:08,093] DEBUG - wire HTTPS-Listener I/O dispatcher-5 >> "Host: localhost:8243[\r][\n]" [2019-12-12 17:30:08,094] DEBUG - wire HTTPS-Listener I/O dispatcher-5 >> "User-Agent: curl/7.54.0[\r][\n]" [2019-12-12 17:30:08,095] DEBUG - wire HTTPS-Listener I/O dispatcher-5 >> "accept: */*[\r][\n]" [2019-12-12 17:30:08,096] DEBUG - wire HTTPS-Listener I/O dispatcher-5 >> "Authorization: Bearer 07f6b26d-0f8d-312a-8d38-797e054566cd[\r][\n]" [2019-12-12 17:30:08,097] DEBUG - wire HTTPS-Listener I/O dispatcher-5 >> "[\r][\n]" [2019-12-12 17:30:08,105] DEBUG - wire HTTP-Sender I/O dispatcher-1 << "GET /v2/5df22aa131000084009a30a9 HTTP/1.1[\r][\n]" [2019-12-12 17:30:08,106] DEBUG - wire HTTP-Sender I/O dispatcher-1 << "accept: */*[\r][\n]" [2019-12-12 17:30:08,107] DEBUG - wire HTTP-Sender I/O dispatcher-1 << "Host: www.mocky.io[\r][\n]" [2019-12-12 17:30:08,108] DEBUG - wire HTTP-Sender I/O dispatcher-1 << "Connection: Keep-Alive[\r][\n]" [2019-12-12 17:30:08,109] DEBUG - wire HTTP-Sender I/O dispatcher-1 << "User-Agent: Synapse-PT-HttpComponents-NIO[\r][\n]" [2019-12-12 17:30:08,110] DEBUG - wire HTTP-Sender I/O dispatcher-1 << "[\r][\n]" [2019-12-12 17:30:08,266] DEBUG - wire HTTP-Sender I/O dispatcher-1 >> "HTTP/1.1 200 OK[\r][\n]" [2019-12-12 17:30:08,268] DEBUG - wire HTTP-Sender I/O dispatcher-1 >> "Server: Cowboy[\r][\n]" [2019-12-12 17:30:08,269] DEBUG - wire HTTP-Sender I/O dispatcher-1 >> "Connection: keep-alive[\r][\n]" [2019-12-12 17:30:08,271] DEBUG - wire HTTP-Sender I/O dispatcher-1 >> "Date: Thu, 12 Dec 2019 12:00:08 GMT[\r][\n]" [2019-12-12 17:30:08,272] DEBUG - wire HTTP-Sender I/O dispatcher-1 >> "Content-Type: application/json[\r][\n]" [2019-12-12 17:30:08,273] DEBUG - wire HTTP-Sender I/O dispatcher-1 >> "Content-Length: 20[\r][\n]" [2019-12-12 17:30:08,274] DEBUG - wire HTTP-Sender I/O dispatcher-1 >> "Via: 1.1 vegur[\r][\n]" [2019-12-12 17:30:08,275] DEBUG - wire HTTP-Sender I/O dispatcher-1 >> "[\r][\n]" [2019-12-12 17:30:08,276] DEBUG - wire HTTP-Sender I/O dispatcher-1 >> "{ "hello": "world" }" [2019-12-12 17:30:08,282] DEBUG - wire HTTPS-Listener I/O dispatcher-5 << "HTTP/1.1 200 OK[\r][\n]" [2019-12-12 17:30:08,283] DEBUG - wire HTTPS-Listener I/O dispatcher-5 << "Access-Control-Expose-Headers: [\r][\n]" [2019-12-12 17:30:08,284] DEBUG - wire HTTPS-Listener I/O dispatcher-5 << "Access-Control-Allow-Origin: *[\r][\n]" [2019-12-12 17:30:08,285] DEBUG - wire HTTPS-Listener I/O dispatcher-5 << "Access-Control-Allow-Methods: GET[\r][\n]" [2019-12-12 17:30:08,286] DEBUG - wire HTTPS-Listener I/O dispatcher-5 << "Access-Control-Allow-Headers: authorization,Access-Control-Allow-Origin,Content-Type,SOAPAction,Authorization[\r][\n]" [2019-12-12 17:30:08,287] DEBUG - wire HTTPS-Listener I/O dispatcher-5 << "Content-Type: application/json[\r][\n]" [2019-12-12 17:30:08,287] DEBUG - wire HTTPS-Listener I/O dispatcher-5 << "Via: 1.1 vegur[\r][\n]" [2019-12-12 17:30:08,288] DEBUG - wire HTTPS-Listener I/O dispatcher-5 << "Date: Thu, 12 Dec 2019 12:00:08 GMT[\r][\n]" [2019-12-12 17:30:08,289] DEBUG - wire HTTPS-Listener I/O dispatcher-5 << "Transfer-Encoding: chunked[\r][\n]" [2019-12-12 17:30:08,290] DEBUG - wire HTTPS-Listener I/O dispatcher-5 << "[\r][\n]" 
[2019-12-12 17:30:08,290] DEBUG - wire HTTPS-Listener I/O dispatcher-5 << "14[\r][\n]" [2019-12-12 17:30:08,291] DEBUG - wire HTTPS-Listener I/O dispatcher-5 << "{ "hello": "world" }[\r][\n]" [2019-12-12 17:30:08,292] DEBUG - wire HTTPS-Listener I/O dispatcher-5 << "0[\r][\n]" [2019-12-12 17:30:08,293] DEBUG - wire HTTPS-Listener I/O dispatcher-5 << "[\r][\n]" And also according to the doc :  DEBUG - wire >> Represents the message coming into the API Gateway from the wire. DEBUG - wire << Represents the message that goes to the wire from the API Gateway.   I use AWS Lambda to retrieve the WSO2-APIM logs, which are stored in AWS CloudWatch. I've just started using Splunk so I'm not very good at SPL. I would like Splunk to process events with SPL and then output something like this : Date, loglevel, action_https, correlationID, message, duration [2019-12-12 17:30:08,091], DEBUG, HTTPS-Listener, dispatcher-5, "GET /helloWorld/1.0.0 HTTP/1.1[\r][\n]" "Host: localhost:8243[\r][\n]" "User-Agent: curl/7.54.0[\r][\n]" "accept: */*[\r][\n]" "Authorization: Bearer 07f6b26d-0f8d-312a-8d38-797e054566cd[\r][\n]" "[\r][\n]", 006 [2019-12-12 17:30:08,105], DEBUG, HTTPS-Listener, dispatcher-1, "GET /v2/5df22aa131000084009a30a9 HTTP/1.1[\r][\n]" "accept: */*[\r][\n]" "Host: www.mocky.io[\r][\n]" "Connection: Keep-Alive[\r][\n]" "User-Agent: Synapse-PT-HttpComponents-NIO[\r][\n]" "[\r][\n]", 005 [2019-12-12 17:30:08,266], DEBUG, HTTPS-Sender, dispatcher-1, "HTTP/1.1 200 OK[\r][\n]" "Server: Cowboy[\r][\n]" "Connection: keep-alive[\r][\n]" "Date: Thu, 12 Dec 2019 12:00:08 GMT[\r][\n]" "Content-Type: application/json[\r][\n]" "Content-Length: 20[\r][\n]" "Via: 1.1 vegur[\r][\n]" "[\r][\n]" "{ "hello": "world" }", 010 [2019-12-12 17:30:08,282], DEBUG, HTTPS-Listener, dispatcher-5, "HTTP/1.1 200 OK[\r][\n]" "Access-Control-Expose-Headers: [\r][\n]" "Access-Control-Allow-Origin: *[\r][\n]" "Access-Control-Allow-Methods: GET[\r][\n]" "Access-Control-Allow-Headers: authorization,Access-Control-Allow-Origin,Content-Type,SOAPAction,Authorization[\r][\n]" "Content-Type: application/json[\r][\n]" "Via: 1.1 vegur[\r][\n]" "Date: Thu, 12 Dec 2019 12:00:08 GMT[\r][\n]" "Transfer-Encoding: chunked[\r][\n]" "[\r][\n]" "14[\r][\n]" "{ "hello": "world" }[\r][\n]" "0[\r][\n]" "[\r][\n]", 011 Do you have any ideas on how to do this with SPL in the Search App? Thank you for those who took the time to read and reply to me.
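A hedged sketch of one way to start in the Search app, assuming a hypothetical index/sourcetype for the CloudWatch-ingested logs and that each wire log line arrives as its own event; rex pulls out the level, listener/sender, dispatcher and direction, and transaction then groups consecutive lines for the same listener, dispatcher and direction into one row with a duration:

index=wso2 sourcetype=wso2:apim:wire
| rex "^\[(?<log_time>[^\]]+)\]\s+(?<loglevel>\w+)\s+-\s+wire\s+(?<action_https>\S+)\s+I/O\s+(?<correlationID>dispatcher-\d+)\s+(?<direction><<|>>)\s+(?<message>.*)"
| transaction action_https, correlationID, direction maxpause=1s
| eval duration_ms = round(duration * 1000, 0)
| table _time, loglevel, action_https, correlationID, message, duration_ms

The maxpause value is a guess at how far apart lines of the same request/response can be; it would need tuning against real traffic.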
Hello, I have an architecture with a single SH and two indexers. I've installed the Splunk for Microsoft 365 add-on on the search head, so the collected logs are stored in the search head's index, but I want them to be stored on the indexers. Can you help me? Thank you.
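A minimal outputs.conf sketch of the usual fix, placed on the search head so that everything its modular inputs collect is forwarded to the indexers instead of being written locally; the group name and indexer host:port values are placeholders:

# outputs.conf on the search head
[indexAndForward]
index = false

[tcpout]
defaultGroup = primary_indexers
forwardedindex.filter.disable = true
indexAndForward = false

[tcpout:primary_indexers]
server = indexer1.example.com:9997, indexer2.example.com:9997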
I need to call a custom function inside another custom function. How can I implement this?