All Topics

I have two searches and I only want the rows that have a common messageID. Currently the query returns extra rows because of the second search: the part before the append returns 100 records, but with the append the result has 110 rows, and for those extra 10 rows the messageID from the first search is NULL, so I want to drop those rows. How can I change this query to make it work? I am trying to get the count of matched IDs and a list of all such IDs.

```query for apigateway call```
(index=aws_np earliest="03/28/2025:13:30:00" latest="03/28/2025:14:35:00" Method response body after transformations: sourcetype="aws:apigateway" business_unit=XX aws_account_alias="XXXX" network_environment=xxXXX source="API-Gateway-Execution-Logs*" (application="xXXXXX" OR application="xXXXX-xXX")
| rex field=_raw "Method response body after transformations: (?<json>[^$]+)"
| spath input=json path="header.messageID" output=messageID
| spath input=json path="payload.statusType.code" output=status
| spath input=json path="payload.statusType.text" output=text
| spath input=json path="header.action" output=action
| where status=200 and action="Create"
| rename _time as request_time
| table messageID, request_time)
| append ```query for 2nd query call```
    [ search kubernetes_cluster="eks-XXX*" index="aws_XXX" sourcetype="kubernetes_logs" source=*XXXX* "sendData"
    | rex field=_raw "sendData: (?<json>[^$]+)"
    | spath input=json path="header.messageID" output=messageID
    | rename _time as pubsub_time
    | table messageID, pubsub_time ]
| stats values(request_time) as request_time values(pubsub_time) as pubsub_time by messageID
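For illustration, one possible way to keep only the messageIDs that appear in both searches is to filter, after the final stats, on rows that have both time values; a minimal sketch reusing the existing field names:

... | stats values(request_time) as request_time values(pubsub_time) as pubsub_time by messageID
``` keep only messageIDs that have a time from both searches ```
| where isnotnull(request_time) AND isnotnull(pubsub_time)
``` distinct count of matched IDs alongside the list of IDs ```
| eventstats dc(messageID) as matched_count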
Hi everyone, I'm seeking advice on the best way to send application logs from our client's Docker containers into a Splunk Cloud instance, and I’d appreciate your input and experiences. Currently, my leading approach involves using Docker’s "Splunk logging driver" to forward data via the HEC. However, my understanding is that this method primarily sends container-level data rather than detailed application logs. Another method I came across involves deploying Splunk's Docker image to create a standalone Enterprise container alongside the Universal Forwarder. The idea here is to set up monitors in the forwarder's inputs.conf to send data to the Enterprise instance and then route it via a Heavy Forwarder to Splunk Cloud. Has anyone successfully implemented either of these approaches—or perhaps a different method—to ingest application logs from Docker containers into Splunk Cloud? Any insights, tips, or shared experiences would be greatly appreciated. Thanks in advance for your help! Cheers,
Hello Team - I have a strange use case: while invoking Splunk Cloud REST APIs via the Python SDK, only the /services/apps/local endpoint returns a 200 response; for any other endpoint, such as /services/server/info or /services/search/jobs, I get a connection timeout. While debugging, I looked at Splunk's internal logs (index=_internal) and found that for the request made through the client there is an entry in the access logs with a 200/201 HTTP code, but I am not sure why it still results in a connection timeout [Errno 110], as if the client kept waiting for the response from the server and eventually gave up. I tried increasing the timeout value on the client side as well, with no luck. I don't think reachability is the issue here, since the /services/apps/local endpoint on port 8089 is accessible, and for the other endpoints there are also log traces on the Splunk Cloud side, as mentioned above. So what could be the issue here? The search query is also extremely simple: search index=_internal | stats count by sourcetype. Please help.
Hello Splunk Community, I need to find out how many upgrades were performed on systems and I am unsure how best to proceed. The data is similar to what is listed below:

_time        hostname   system   model   version
2025-01-01   a          x        x       15.2(8)
2025-01-01   b          y        y       15.3(5)
2025-01-02   a          x        x       15.3(5)

There are thousands of systems with various versions. I am trying to find a way to capture devices that have gone from one version to a newer one, indicating an upgrade took place. Multiple upgrades could have occurred over time for a single device, and those need to be accounted for as well. Any help suggesting where to start and what to use would be greatly appreciated. Thanks. -E
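For illustration, a minimal sketch of one possible approach using streamstats to compare each device's version with its previous one (the index, sourcetype, and field names here are assumptions and need to match the actual data):

index=your_index sourcetype=your_sourcetype
| sort 0 hostname _time
``` compare each event's version with the previous event for the same host ```
| streamstats current=f window=1 last(version) as previous_version by hostname
| where isnotnull(previous_version) AND version!=previous_version
``` note: this counts any version change, including downgrades ```
| stats count as version_changes values(version) as versions_seen by hostname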
I'm ingesting data into Splunk via the HTTP Event Collector (HEC), but the data is wrapped inside a "data" key instead of "event". Splunk expects events inside the "event" key, and I'm getting the error:

Failed to send data: {"text":"No data","code":5}

Here's an example of the data I'm sending:

{
  "data": {
    "timestamp": "2025-04-01T19:51:07.720Z",
    "userId": "",
    "userAgent": "Visual Studio Code/1.98.2 (Continue/1.0.5)",
    "selectedProfileId": "local",
    "eventName": "chatFeedback",
    "schema": "0.2.0",
    "prompt": "|>\n",
    "completion": "Sample completion text",
    "modelTitle": "Llama",
    "feedback": true,
    "sessionId": "c36c18eb-25e6-4448-b9b5-a50cdd2a0baa"
  }
  index="test" sourcetype="test:json" source="telemetry"
}

How can I transform incoming HEC data so that "data" is treated as "event" in Splunk? Is there a better way to handle this at the Splunk configuration level? Thanks in advance for any help! @ITWhisperer
Would it be possible to get the Slack add-on for Splunk (4986), version 2.0.2, submitted for cloud vetting so it is compatible for cloud customers? If so, I would like to make the request. The current compatibility for the app, as listed on Splunkbase, only lists Splunk Enterprise. As such, it does not show up as a viable app to be installed for cloud customers, and Splunk support cannot install the app because it is not validated to work on Splunk Cloud stacks. https://splunkbase.splunk.com/app/4986.
I'm getting thousands of log events that say: ERROR CMSlave [2549383 CMNotifyThread] - Cannot find bid=wineventlog~157~96ECF7C4-1951-4288-B90A-9133E5408F14. cleaning up usage data. It appears on all my indexers and references multiple, but not all, indexes. Any ideas on how to fix this error?
Is there an option to add a field to an existing KV store without editing conf files? I don't own the server, so it is difficult to get access to it every time.
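If the collection does not enforce a fixed schema, one thing that can sometimes work is writing the extra field from a search; a rough sketch (the lookup name my_kvstore_lookup and the field new_field are placeholders, and depending on the setup the field may still need to be added to the lookup definition's field list to be usable at search time):

| inputlookup my_kvstore_lookup
``` add the new field with an empty default value and write the records back to the collection ```
| eval new_field=""
| outputlookup my_kvstore_lookup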
Hi Team, is it possible to switch dashboards at a regular interval within the same app? I have around 15 dashboards in the same app and I want to switch to the next dashboard every 2 minutes. In the attached screenshot I have around 15 dashboards, and the home screen is always the "ESES Hotline SUMMARY" dashboard. Is it possible to move automatically to the next dashboard, "ESES Hotline", after 2 minutes, then automatically to the next dashboard, "EVIS Application", after another 2 minutes, and so on?
Hi Team, can you please let me know how to fetch the events with a time greater than the time of the first event in the dashboard.

Example: I have 3 jobs executed every day at around the following times:
Job1: around 10 PM (day D)
Job2: around 3 AM (day D + 1)
Job3: around 6 AM (day D + 1)

I am fetching the latest run of Job1/Job2/Job3 to show in the dashboard and want the result in the format below.

If we are between 5 PM and 10 PM: Job1: PLANNED, Job2: PLANNED, Job3: PLANNED
If we are at 11 PM: Job1: Executed at 10:00, Job2: PLANNED, Job3: PLANNED
If we are at 4 AM: Job1: Executed at 10:00, Job2: Executed at 03:00, Job3: PLANNED
If we are at 7 AM: Job1: Executed at 10:00, Job2: Executed at 03:00, Job3: Executed at 06:00
If we are at 4 PM: Job1: PLANNED, Job2: PLANNED, Job3: PLANNED
If we are at 5 PM: Job1: PLANNED, Job2: PLANNED, Job3: PLANNED

We want to consider the start of the day at 5 PM and the end at 5 PM the next day, instead of using last 24 hours / today.
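For illustration, a minimal sketch of one way to build a rolling 5 PM to 5 PM window and mark jobs that have not yet run as PLANNED (the index, sourcetype, and job_name field are assumptions and need to match the actual data):

index=your_index sourcetype=your_jobs earliest=-2d
``` compute the most recent 5 PM boundary (17h, in the search head's timezone) and keep only events after it ```
| eval day_start=if(now() >= relative_time(now(), "@d+17h"), relative_time(now(), "@d+17h"), relative_time(now(), "-1d@d+17h"))
| where _time >= day_start
| stats latest(_time) as last_run by job_name
``` make sure all three jobs appear even if they have not run yet in this window ```
| append [| makeresults | eval job_name=split("Job1,Job2,Job3", ",") | mvexpand job_name | fields job_name]
| stats max(last_run) as last_run by job_name
| eval status=if(isnull(last_run), "PLANNED", "Executed at ".strftime(last_run, "%H:%M"))
| table job_name status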
Hi, I have the table below and have been trying to represent it in a heat map. I can see the percentage values in the blocks, but how can I get the severity values (very high, high, medium) represented in the blocks?
Hi, please advise how to build Splunk deployment server clustering with the minimum requirements.
Hi Splunkers, today I have the following issue: on our SHC there is a small subset of apps that is managed, and therefore modified, by their users directly on the search heads. What does this mean for us? That we need to copy version updates from the SHs back to the deployer before performing a new app bundle push; otherwise the older version on the deployer will override the updated one on the SHs. My question is: is there any way, on Splunk version 9.2.1, to exclude certain apps from being updated when the deployer pushes a bundle? The final purpose, just as an example: if the deployer has 100 apps in $SPLUNK_HOME/etc/shcluster/apps, we want 95 of them to be updated on a bundle push (when the SH version is not equal to the deployer one), while the remaining 5 should not be updated by the deployer.
We have installed the Akamai add-on (https://splunkbase.splunk.com/app/4310) on our HF, installed Java, and configured the data input on the HF, creating an index on the HF just so it appears in the dropdown; we created the same index on the CM and pushed it to the indexers. But we are not receiving any data. When checking splunkd.log we see the following:

04-02-2025 11:08:27.529 +0000 INFO ExecProcessor [8927 ExecProcessor] - message from "/opt/splunk/etc/apps/TA-Akamai_SIEM/linux_x86_64/bin/TA-Akamai_SIEM.sh" infoMsg = streamEvents, begin streamEvents
04-02-2025 11:08:27.646 +0000 INFO ExecProcessor [8927 ExecProcessor] - message from "/opt/splunk/etc/apps/TA-Akamai_SIEM/linux_x86_64/bin/TA-Akamai_SIEM.sh" infoMsg = streamEvents, inputName=TA-Akamai_SIEM://WAF_AKAMAI_SIEM_DEV
04-02-2025 11:08:27.646 +0000 INFO ExecProcessor [8927 ExecProcessor] - message from "/opt/splunk/etc/apps/TA-Akamai_SIEM/linux_x86_64/bin/TA-Akamai_SIEM.sh" infoMsg = streamEvents, inputName(String)=TA-Akamai_SIEM://WAF_AKAMAI_SIEM_DEV
04-02-2025 11:08:27.653 +0000 INFO ExecProcessor [8927 ExecProcessor] - message from "/opt/splunk/etc/apps/TA-Akamai_SIEM/linux_x86_64/bin/TA-Akamai_SIEM.sh" infoMsg streamEvents Service connect to Akamai_SIEM App...
04-02-2025 11:08:27.900 +0000 INFO ExecProcessor [8927 ExecProcessor] - message from "/opt/splunk/etc/apps/TA-Akamai_SIEM/linux_x86_64/bin/TA-Akamai_SIEM.sh" infoMsg=Processing Data...
04-02-2025 11:08:27.900 +0000 INFO ExecProcessor [8927 ExecProcessor] - message from "/opt/splunk/etc/apps/TA-Akamai_SIEM/linux_x86_64/bin/TA-Akamai_SIEM.sh" infoMsg=KV Service get...
04-02-2025 11:08:27.902 +0000 INFO ExecProcessor [8927 ExecProcessor] - message from "/opt/splunk/etc/apps/TA-Akamai_SIEM/linux_x86_64/bin/TA-Akamai_SIEM.sh" infoMsg=Parse KVstore data...
04-02-2025 11:08:27.946 +0000 INFO ExecProcessor [8927 ExecProcessor] - message from "/opt/splunk/etc/apps/TA-Akamai_SIEM/linux_x86_64/bin/TA-Akamai_SIEM.sh" infoMsg=Parse KVstore data...Complete
04-02-2025 11:08:27.946 +0000 INFO ExecProcessor [8927 ExecProcessor] - message from "/opt/splunk/etc/apps/TA-Akamai_SIEM/linux_x86_64/bin/TA-Akamai_SIEM.sh" urlToRequest=https://akab-hg3zdmaay4bq4n5w-ljwg5vtmjxs5ukg2.luna.akamaiapis.net/siem/v1/configs/108115;107918?offset=fd2ba;oj2ETReWQtqhoYX8yuFwqtycwtzWgKUIa_hXJeP06170pYL_XCOdDTR_8u7mXpcuzAfAbBrlVyYQpgwhoHKPYpRQL4dWnY7TENjBhJv0WlUKy1oaCxYa_dEz5w68Rf4RKLqk&limit=150000
04-02-2025 11:08:28.820 +0000 INFO ExecProcessor [8927 ExecProcessor] - message from "/opt/splunk/etc/apps/TA-Akamai_SIEM/linux_x86_64/bin/TA-Akamai_SIEM.sh" status code=200
04-02-2025 11:08:28.822 +0000 INFO ExecProcessor [8927 ExecProcessor] - message from "/opt/splunk/etc/apps/TA-Akamai_SIEM/linux_x86_64/bin/TA-Akamai_SIEM.sh" awaiting shutdown...
04-02-2025 11:08:28.850 +0000 INFO ExecProcessor [8927 ExecProcessor] - message from "/opt/splunk/etc/apps/TA-Akamai_SIEM/linux_x86_64/bin/TA-Akamai_SIEM.sh" found new offset: fd2ba;-kKV2wsV1oLesFFgkhv-dUAfVlC09trNuJWPKUOI8wCVnPWtwMjhld_MIgN84uv9OcFL6Fq5EwOs-wwKHLC1hUDvjBAhG7ZeROQ4kxLcdDwYSFhmF_iTYqmW8EE26VWd9cW1
04-02-2025 11:08:28.851 +0000 INFO ExecProcessor [8927 ExecProcessor] - message from "/opt/splunk/etc/apps/TA-Akamai_SIEM/linux_x86_64/bin/TA-Akamai_SIEM.sh" termination complete....
04-02-2025 11:08:28.851 +0000 INFO ExecProcessor [8927 ExecProcessor] - message from "/opt/splunk/etc/apps/TA-Akamai_SIEM/linux_x86_64/bin/TA-Akamai_SIEM.sh" Cores: 8
04-02-2025 11:08:28.851 +0000 INFO ExecProcessor [8927 ExecProcessor] - message from "/opt/splunk/etc/apps/TA-Akamai_SIEM/linux_x86_64/bin/TA-Akamai_SIEM.sh" Consumer CPU time: 0.03 s
04-02-2025 11:08:28.851 +0000 INFO ExecProcessor [8927 ExecProcessor] - message from "/opt/splunk/etc/apps/TA-Akamai_SIEM/linux_x86_64/bin/TA-Akamai_SIEM.sh" EdgeGrid time: 0.88 s
04-02-2025 11:08:28.852 +0000 INFO ExecProcessor [8927 ExecProcessor] - message from "/opt/splunk/etc/apps/TA-Akamai_SIEM/linux_x86_64/bin/TA-Akamai_SIEM.sh" Real time: 1.21 s
04-02-2025 11:08:28.852 +0000 INFO ExecProcessor [8927 ExecProcessor] - message from "/opt/splunk/etc/apps/TA-Akamai_SIEM/linux_x86_64/bin/TA-Akamai_SIEM.sh" Consumer CPU utilization: 14.15%
04-02-2025 11:08:28.852 +0000 INFO ExecProcessor [8927 ExecProcessor] - message from "/opt/splunk/etc/apps/TA-Akamai_SIEM/linux_x86_64/bin/TA-Akamai_SIEM.sh" Lines Processed: 1
04-02-2025 11:08:28.852 +0000 INFO ExecProcessor [8927 ExecProcessor] - message from "/opt/splunk/etc/apps/TA-Akamai_SIEM/linux_x86_64/bin/TA-Akamai_SIEM.sh" infoMsg=KV Service get...
04-02-2025 11:08:28.854 +0000 INFO ExecProcessor [8927 ExecProcessor] - message from "/opt/splunk/etc/apps/TA-Akamai_SIEM/linux_x86_64/bin/TA-Akamai_SIEM.sh" infoMsg=Parse KVstore data...
04-02-2025 11:08:28.855 +0000 INFO ExecProcessor [8927 ExecProcessor] - message from "/opt/splunk/etc/apps/TA-Akamai_SIEM/linux_x86_64/bin/TA-Akamai_SIEM.sh" infoMsg=Parse KVstore data...Complete
04-02-2025 11:08:28.870 +0000 INFO ExecProcessor [8927 ExecProcessor] - message from "/opt/splunk/etc/apps/TA-Akamai_SIEM/linux_x86_64/bin/TA-Akamai_SIEM.sh" infoMsg = streamEvents, end streamEvents
04-02-2025 11:08:28.870 +0000 ERROR ExecProcessor [8927 ExecProcessor] - message from "/opt/splunk/etc/apps/TA-Akamai_SIEM/linux_x86_64/bin/TA-Akamai_SIEM.sh" javax.xml.stream.XMLStreamException: No element was found to write: java.lang.ArrayIndexOutOfBoundsException: -1

I am not sure what these errors mean, but when we search index=<created index> on the SH, no data is showing. Please help with this case. We have even installed this add-on on the deployer (removing inputs.conf) and pushed it to the SHs, since it has props and transforms that need to be applied at search time.
When running my Bamboo plan, I am unable to generate the Splunk log JSON file. This is the log:

build 02-Apr-2025 11:57:27 /home/bamboo-agent/xml-data/build-dir/CBPPOC-SLPIB-JOB1/dbscripts
build 02-Apr-2025 11:57:27 _bamboo_build.sh
build 02-Apr-2025 11:57:27 _build.sh
build 02-Apr-2025 11:57:27 licensecode.py
build 02-Apr-2025 11:57:27 _push2release.sh
build 02-Apr-2025 11:57:27 _push2snapshot.sh
build 02-Apr-2025 11:57:27 splunkQueries.txt
build 02-Apr-2025 11:57:28 [licensecode.py:43 - login() ] Logging in to Splunk API initiated
build 02-Apr-2025 11:57:28 [licensecode.py:62 - login() ] Logged in as: M022754
build 02-Apr-2025 11:57:28 [licensecode.py:257 - main() ] Command line param queryFile has value: splunkQueries.txt
build 02-Apr-2025 11:57:28 [licensecode.py:159 - processQueryFile() ] Query: search eventtype="cba-env-prod" NNO.API.LOG.PM.UPDATE latest=now earliest=-7d
build 02-Apr-2025 11:57:28 [licensecode.py:161 - processQueryFile() ] Number of queries in queue: 1
build 02-Apr-2025 11:57:28 [licensecode.py:193 - triggerSearch() ] Triggering search for query: search eventtype="cba-env-prod" NNO.API.LOG.PM.UPDATE latest=now earliest=-7d
build 02-Apr-2025 11:57:28 [licensecode.py:201 - triggerSearch() ] Search initiated with SID: 1743587848.2389422_84B919DD-8E60-47EE-AF06-F6EE20B95178
build 02-Apr-2025 11:57:38 [licensecode.py:265 - main() ] Waiting next 10 seconds for all queries to complete
build 02-Apr-2025 11:57:48 [licensecode.py:265 - main() ] Waiting next 10 seconds for all queries to complete
build 02-Apr-2025 11:57:58 [licensecode.py:265 - main() ] Waiting next 10 seconds for all queries to complete
build 02-Apr-2025 11:58:08 [licensecode.py:265 - main() ] Waiting next 10 seconds for all queries to complete
build 02-Apr-2025 11:58:18 [licensecode.py:265 - main() ] Waiting next 10 seconds for all queries to complete
build 02-Apr-2025 11:58:28 [licensecode.py:265 - main() ] Waiting next 10 seconds for all queries to complete
build 02-Apr-2025 11:58:38 [licensecode.py:265 - main() ] Waiting next 10 seconds for all queries to complete
build 02-Apr-2025 11:58:48 [licensecode.py:265 - main() ] Waiting next 10 seconds for all queries to complete
build 02-Apr-2025 11:58:58 [licensecode.py:265 - main() ] Waiting next 10 seconds for all queries to complete
build 02-Apr-2025 11:59:08 [licensecode.py:265 - main() ] Waiting next 10 seconds for all queries to complete
build 02-Apr-2025 11:59:18 [licensecode.py:265 - main() ] Waiting next 10 seconds for all queries to complete
build 02-Apr-2025 11:59:28 [licensecode.py:265 - main() ] Waiting next 10 seconds for all queries to complete
build 02-Apr-2025 11:59:38 [licensecode.py:265 - main() ] Waiting next 10 seconds for all queries to complete
build 02-Apr-2025 11:59:48 [licensecode.py:265 - main() ] Waiting next 10 seconds for all queries to complete
build 02-Apr-2025 11:59:58 [licensecode.py:265 - main() ] Waiting next 10 seconds for all queries to complete
build 02-Apr-2025 12:00:08 [licensecode.py:265 - main() ] Waiting next 10 seconds for all queries to complete
build 02-Apr-2025 12:00:18 [licensecode.py:265 - main() ] Waiting next 10 seconds for all queries to complete
build 02-Apr-2025 12:00:28 [licensecode.py:265 - main() ] Waiting next 10 seconds for all queries to complete
build 02-Apr-2025 12:00:38 [licensecode.py:265 - main() ] Waiting next 10 seconds for all queries to complete
build 02-Apr-2025 12:00:48 [licensecode.py:265 - main() ] Waiting next 10 seconds for all queries to complete
build 02-Apr-2025 12:00:58 [licensecode.py:265 - main() ] Waiting next 10 seconds for all queries to complete
build 02-Apr-2025 12:00:58 [licensecode.py:268 - main() ] Execution timeout of 200 seconds has passed, exiting
simple 02-Apr-2025 12:00:58 Failing task since return code of [/home/bamboo-agent/temp/CBPPOC-SLPIB-JOB1-268-ScriptBuildTask-11226880290426353947.sh] was 1 while expected 0
02-Apr-2025 12:00:58 Failing as no matching files has been found and empty artifacts are not allowed.

After the waiting time completes, the JSON log file is not generated. Please help me understand how to resolve this issue.
Hello everyone, I need help determining the time an analyst needs to investigate an alert and close it. For more clarity, I want to calculate the time spent from when the status_label field value is updated from In Progress to Closed or Resolved. Note that the default value of this field is New. I am new to Splunk, so please write the full query and I will adjust it for my needs.
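For illustration, a minimal sketch of one possible approach, assuming each status change is logged as its own event with an alert identifier (the index name and the alert_id field are assumptions and need to match your incident data):

index=your_incident_audit_index status_label IN ("In Progress", "Closed", "Resolved")
``` capture the first time the alert went In Progress and the last time it was Closed or Resolved ```
| eval started=if(status_label=="In Progress", _time, null()), finished=if(status_label=="Closed" OR status_label=="Resolved", _time, null())
| stats min(started) as started max(finished) as finished by alert_id
| where isnotnull(started) AND isnotnull(finished)
| eval handle_time_minutes=round((finished-started)/60, 1)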
Hi there, I'm simplifying the context: we've had a perfectly working correlation rule for several years now, and for the past 2 days it hasn't been working properly. The search has to list IPs and then check that these IPs are not in a first lookup, and then not in a second lookup. If the IPs are not in either lookup, an alert is triggered. The IPs are then added to the second lookup, so that they are ignored in future searches. It looks like this:

<my search>
| dedup ip
| search NOT [ | inputlookup 1.csv ]
| search NOT [ | inputlookup 2.csv ]
| fields ip
| outputlookup append=true override_if_empty=false 2.csv

The lookups are both identical in structure:

IP
-------
1.1.1.1
2.2.2.2
etc.

The first lookup has 1,000 lines; the second lookup has 55,000 lines. Everything was working fine, but now we have IPs that trigger alerts despite being in the second lookup. Any ideas? Thanks a lot.
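If subsearch result limits are a factor (55,000 rows is well above the default subsearch limits, so the NOT [ | inputlookup 2.csv ] filter may be silently truncated), one alternative sketch avoids subsearches entirely by using the lookup command; this assumes the CSV column is named IP and the event field is ip, and if referencing the .csv file name directly does not work in your version, a lookup definition name can be used instead:

<my search>
| dedup ip
``` flag IPs found in either lookup, then keep only the ones found in neither ```
| lookup 1.csv IP as ip OUTPUT IP as in_first
| lookup 2.csv IP as ip OUTPUT IP as in_second
| where isnull(in_first) AND isnull(in_second)
| fields ip
| outputlookup append=true override_if_empty=false 2.csv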
I need to upgrade the universal forwarder agents on multiple instances from the current 7.3.0 to the latest version. Can we upgrade directly, or do we need to go step by step? Please let me know the process and the best practices for the upgrade.
Hi, I am working on the query below to get the count of requests processed by each API service per minute:

index=np source IN ("/aws/lambda/api-data-test-*") "responseTime"
| eval source = if(match(source, "/aws/lambda/api-data-test-(.*)"), replace(source, "/aws/lambda/api-data-test-(.*)", "data/\\1"), source)
| bucket _time span=1m
| stats count by source, _time

I get the result below: one source for "name", a second source for address, and a third source for city. How can I represent the different API sources per minute in an easily understandable format, either as a graph or another pictorial representation?

source         _time                 count
data/name      2025-03-02 08:13:00   2
data/name      2025-03-02 08:14:00   57
data/name      2025-03-02 08:15:00   347
data/name      2025-03-02 08:16:00   62
data/name      2025-03-02 08:17:00   48
data/address   2025-03-02 08:18:00   21
data/city      2025-03-02 08:19:00   66
data/city      2025-03-02 08:20:00   55
data/address   2025-03-02 08:21:00   7

name event:
{"name":"log","awsRequestId":"aws","hostname":"1","pid":8,"level":30,"requestType":"GET","entity":"name","client":"Ha2@gmail.com","domain":"name.io","queryParams":{"identifier":"977265"},"responseTime":320,"msg":"responseTime","time":"2025-03-02T03:23:40.504Z","v":0}

address event:
{"name":"log","awsRequestId":"aws","hostname":"1","pid":8,"level":30,"requestType":"GET","entity":"address","client":"Harggg2@gmail.com","domain":"name.io","queryParams":{"identifier":"977265"},"responseTime":320,"msg":"responseTime","time":"2025-03-02T03:23:40.504Z","v":0}
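For illustration, one compact way to chart per-minute request counts with one series per API service is timechart; a minimal sketch reusing the fields from the query above:

index=np source IN ("/aws/lambda/api-data-test-*") "responseTime"
| eval source = if(match(source, "/aws/lambda/api-data-test-(.*)"), replace(source, "/aws/lambda/api-data-test-(.*)", "data/\\1"), source)
``` one column per service, one row per minute; render as a line or column chart ```
| timechart span=1m count by source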
Hello, has anyone encountered incomplete log transmission when using UDP 514? Would changing to TCP be useful? I would appreciate your support. Greetings.