All Posts


You could try something like this

| tstats count where index=_internal OR index=* NOT
    [ search index="_internal" source="*metrics.log*" group=tcpin_connections
      | stats count by hostname
      | rename hostname as host
      | table host ]
  BY host
Try something like this

index="aws-apigateway" source1="rkedgevil-restapi-Access-Logs:API-Gateway-Access-Logs_8o2y6hzl/prod"
| spath
| spath input=event
| table path responsetime

You may not need the first spath command if your ingestion path already recognises the JSON data format.
The query below is returning about 3000 events like that one. How can I make this command work across all of them? Could you please give me the exact command?

index="aws-apigateway" source1="rkedgevil-restapi-Access-Logs:API-Gateway-Access-Logs_8o2y6hzl/prod"
| makeresults
| eval _raw="{\"time\": 1722582494370,\"host1\": \"arn:aws:firehose:ca-central-1:2222222:deliverystream/Splunk-Kinesis-apigateway-CA\",\"source1\": \"rkedgevil-restapi-Access-Logs:API-Gateway-Access-Logs_8o2y6hzl6e/prod\",\"event\": \"{ \\\"requestId\\\":\\\"d85fa529-3979-44a3-9018-21f81e12eafd\\\", \\\"ip\\\": \\\"40.82.191.190\\\", \\\"caller\\\":\\\"-\\\", \\\"user\\\":\\\"-\\\",\\\"requestTime\\\":\\\"02/Aug/2024:07:08:14 +0000\\\", \\\"httpMethod\\\":\\\"POST\\\",\\\"resourcePath\\\":\\\"/{proxy+}\\\", \\\"status\\\":\\\"200\\\",\\\"protocol\\\":\\\"HTTP/1.1\\\", \\\"responseLength\\\":\\\"573\\\", \\\"clientCertIssuerDN\\\":\\\"C=GB,ST=Greater Manchester,L=Salford,O=Sectigo Limited,CN=Sectigo RSA Organization Validation Secure Server CA\\\", \\\"clientCertSerialNumber\\\":\\\"22210811239199552309700144370732535146\\\", \\\"clientCertNotBefore\\\":\\\"Jan 22 00:00:00 2024 GMT\\\", \\\"clientCertNotAfter\\\":\\\"Jan 21 23:59:59 2025 GMT\\\", \\\"path\\\":\\\"/rkedgeapp/provider/dental/keysearch/\\\", \\\"responsetime\\\":\\\"156\\\" }\"}"
``` the line above recreates your sample event ```
| spath
| spath input=event
| table path responsetime
Here is my output data. I want to create a table of path and responsetime. Can you please help? The expected output is below:

path                                          responsetime
/rkedgeapp/provider/dental/keysearch/         156

{"time": 1722582494370,"host1": "arn:aws:firehose:ca-central-1:2222222:deliverystream/Splunk-Kinesis-apigateway-CA","source1": "rkedgevil-restapi-Access-Logs:API-Gateway-Access-Logs_8o2y6hzl6e/prod","event": "{ \"requestId\":\"d85fa529-3979-44a3-9018-21f81e12eafd\", \"ip\": \"40.82.191.190\", \"caller\":\"-\", \"user\":\"-\",\"requestTime\":\"02/Aug/2024:07:08:14 +0000\", \"httpMethod\":\"POST\",\"resourcePath\":\"/{proxy+}\", \"status\":\"200\",\"protocol\":\"HTTP/1.1\", \"responseLength\":\"573\", \"clientCertIssuerDN\":\"C=GB,ST=Greater Manchester,L=Salford,O=Sectigo Limited,CN=Sectigo RSA Organization Validation Secure Server CA\", \"clientCertSerialNumber\":\"22210811239199552309700144370732535146\", \"clientCertNotBefore\":\"Jan 22 00:00:00 2024 GMT\", \"clientCertNotAfter\":\"Jan 21 23:59:59 2025 GMT\", \"path\":\"/rkedgeapp/provider/dental/keysearch/\", \"responsetime\":\"156\" }"}
Thank you so much!
I am not sure what you mean - I haven't studied for any exam, I just use my experience to solve problems. Having said that, it depends on what is meant by "recognize transactions". Solving problems in Splunk often involves understanding the data, recognising where patterns exist, and then telling Splunk how to find those patterns. As I said, this can often be done in multiple ways. To learn new commands, if I don't have data to try them out on, there are some free data sources, such as the Buttercup Games tutorial data set, or I often just generate test data with the makeresults or gentimes commands, as in the sketch below.
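For example, a minimal sketch using makeresults to fabricate a handful of events (the sessionId and action fields here are invented purely for practice):

| makeresults count=5
``` number the rows so each can get a distinct timestamp and field values ```
| streamstats count as n
``` spread the events one minute apart, working backwards from now ```
| eval _time=_time-(n*60)
| eval sessionId=if(n<=3,"A","B")
| eval action=case(n==1,"login", n==2,"view", n==3,"logout", n==4,"login", true(),"logout")
| table _time sessionId action

Piping these rows into | stats count by sessionId or | transaction sessionId lets you compare the two grouping approaches side by side on data you fully control.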
I have installed AppDynamics locally, with a private Synthetic Server + PSA Agent on the same host and the Controller and EUM on separate hosts. My issue is that whenever I schedule a job in AppDynamics in the User Experience dashboard, the agent hits the provided URL and gets a response. It is displayed on the session tab, but only one hit is registered and shown, despite the job being scheduled to run every minute. We have looked into the scheduler-shepherd logs, where we see the following:

INFO 2024-08-01 10:13:24,121 SyntheticBackgroundScheduler_Worker-16 JobRunShell Job background-job.com.appdynamics.synthetic.scheduler.core.tasks.ClusterWatcher threw a JobExecutionException:
! java.io.IOException: error=2, No such file or directory
! at java.base/java.lang.ProcessImpl.forkAndExec(Native Method)
! at java.base/java.lang.ProcessImpl.<init>(ProcessImpl.java:314)
! at java.base/java.lang.ProcessImpl.start(ProcessImpl.java:244)
! at java.base/java.lang.ProcessBuilder.start(ProcessBuilder.java:1110)
! ... 12 common frames omitted
! Causing: java.io.IOException: Cannot run program "jps" (in directory "."): error=2, No such file or directory
! at java.base/java.lang.ProcessBuilder.start(ProcessBuilder.java:1143)
! at java.base/java.lang.ProcessBuilder.start(ProcessBuilder.java:1073)
! at java.base/java.lang.Runtime.exec(Runtime.java:594)
! at org.apache.commons.exec.launcher.Java13CommandLauncher.exec(Java13CommandLauncher.java:58)
! at org.apache.commons.exec.DefaultExecutor.launch(DefaultExecutor.java:254)
! at org.apache.commons.exec.DefaultExecutor.executeInternal(DefaultExecutor.java:319)
! at org.apache.commons.exec.DefaultExecutor.execute(DefaultExecutor.java:160)
! at org.apache.commons.exec.DefaultExecutor.execute(DefaultExecutor.java:147)
! at com.appdynamics.synthetic.cluster.local.PSCluster.findAllInstanceIds(PSCluster.java:58)
! ... 4 common frames omitted
! Causing: com.appdynamics.synthetic.cluster.Cluster$ClusterException: Failed to query active schedulers.
! at com.appdynamics.synthetic.cluster.local.PSCluster.findAllInstanceIds(PSCluster.java:67)
! at com.appdynamics.synthetic.cluster.local.PSCluster.findHealthyInstanceIds(PSCluster.java:46)
! at com.appdynamics.synthetic.scheduler.core.tasks.ClusterWatcher.execute(ClusterWatcher.java:49)
! ... 2 common frames omitted
! Causing: org.quartz.JobExecutionException: Exception while attempting to find healthy instances
! at com.appdynamics.synthetic.scheduler.core.tasks.ClusterWatcher.execute(ClusterWatcher.java:52)
! at org.quartz.core.JobRunShell.run(JobRunShell.java:202)
! at org.quartz.simpl.SimpleThreadPool$WorkerThread.run(SimpleThreadPool.java:573)
WARN 2024-08-01 10:13:27,484 dw-583 - PUT /v1/schedule/02a54278-0b5f-4a0e-95f1-9299cd9b4dbc spi idx-requestId=96a2de91531b226a, idx-account=********* javax.persistence.spi::No valid providers found.
INFO 2024-08-01 10:13:27,502 dw-583 - PUT /v1/schedule/02a54278-0b5f-4a0e-95f1-9299cd9b4dbc LicenseHelper idx-requestId=96a2de91531b226a, idx-scheduleId=02a54278-0b5f-4a0e-95f1-9299cd9b4dbc, idx-appKey=*********, idx-account=********* Job with id 02a54278-0b5f-4a0e-95f1-9299cd9b4dbc and description http://172.16.21.70 previously used 1 units of license
INFO 2024-08-01 10:13:27,502 dw-583 - PUT /v1/schedule/02a54278-0b5f-4a0e-95f1-9299cd9b4dbc LicenseHelper idx-requestId=96a2de91531b226a, idx-scheduleId=02a54278-0b5f-4a0e-95f1-9299cd9b4dbc, idx-appKey=*********, idx-account=********* Job with id 02a54278-0b5f-4a0e-95f1-9299cd9b4dbc and description http://172.16.21.70 now has 1 pages and 1 locations and requires 1 units of license
INFO 2024-08-01 10:13:27,505 hystrix-SchedulerGroup-13 SynthJobRunner Will run 1 measurement(s) for job=Schedule{account='*********', appKey='*********', id='02a54278-0b5f-4a0e-95f1-9299cd9b4dbc', description='http://172.16.21.70'}
INFO 2024-08-01 10:13:27,505 hystrix-SchedulerGroup-13 SynthJobRunner Requesting measurement for job=Schedule{account='*********', appKey='*********', id='02a54278-0b5f-4a0e-95f1-9299cd9b4dbc', description='http://172.16.21.70'}, location=NOD:*********, browser=IE11
INFO 2024-08-01 10:13:27,512 hystrix-SchedulerGroup-13 SynthJobRunner Updating state for schedule
WARN 2024-08-01 10:13:33,178 okhttp-eventsource-stream-[]-0 DataSource Error in stream connection (will retry): java.net.UnknownHostException: stream.launchdarkly.com: Name or service not known
INFO 2024-08-01 10:13:33,178 okhttp-eventsource-stream-[]-0 DataSource Waiting 22193 milliseconds before reconnecting...
WARN 2024-08-01 10:13:33,178 okhttp-eventsource-events-[]-0 DataSource Encountered EventSource error: java.net.UnknownHostException: stream.launchdarkly.com: Name or service not known
WARN 2024-08-01 10:13:40,054 hz.epic_ramanujan.cached.thread-9 MulticastService [172.18.0.1]:5701 [dev] [5.3.2] Sending multicast datagram failed. Exception message saying the operation is not permitted usually means the underlying OS is not able to send packets at a given pace. It can be caused by starting several hazelcast members in parallel when the members send their join message nearly at the same time.
! java.net.SocketException: Message too long
! at java.base/sun.nio.ch.DatagramChannelImpl.send0(Native Method)
! at java.base/sun.nio.ch.DatagramChannelImpl.sendFromNativeBuffer(DatagramChannelImpl.java:901)
! at java.base/sun.nio.ch.DatagramChannelImpl.send(DatagramChannelImpl.java:863)
! at java.base/sun.nio.ch.DatagramChannelImpl.send(DatagramChannelImpl.java:821)
! at java.base/sun.nio.ch.DatagramChannelImpl.blockingSend(DatagramChannelImpl.java:853)
! at java.base/sun.nio.ch.DatagramSocketAdaptor.send(DatagramSocketAdaptor.java:218)
! at java.base/java.net.DatagramSocket.send(DatagramSocket.java:664)
! at com.hazelcast.internal.cluster.impl.MulticastService.send(MulticastService.java:309)
! at com.hazelcast.internal.cluster.impl.MulticastJoiner.searchForOtherClusters(MulticastJoiner.java:113)
! at com.hazelcast.internal.cluster.impl.SplitBrainHandler.searchForOtherClusters(SplitBrainHandler.java:75)
! at com.hazelcast.internal.cluster.impl.SplitBrainHandler.run(SplitBrainHandler.java:42)
! at com.hazelcast.spi.impl.executionservice.impl.DelegateAndSkipOnConcurrentExecutionDecorator$DelegateDecorator.run(DelegateAndSkipOnConcurrentExecutionDecorator.java:77)
! at com.hazelcast.internal.util.executor.CachedExecutorServiceDelegate$Worker.run(CachedExecutorServiceDelegate.java:217)
! at java.base/java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1136)
! at java.base/java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:635)
! at java.base/java.lang.Thread.run(Thread.java:840)
! at com.hazelcast.internal.util.executor.HazelcastManagedThread.executeRun(HazelcastManagedThread.java:76)
! at com.hazelcast.internal.util.executor.HazelcastManagedThread.run(HazelcastManagedThread.java:111)
INFO 2024-08-01 10:13:40,409 Account Credential Cache-1 AccountCredentialCache Refreshing account credential cache
INFO 2024-08-01 10:13:41,520 Account App Cache-1 AccountAppCache Retrieved Accounts from EUMAccountResponse. Account list size: 1 Date: 2024-08-01T15:43:41.520+05:30
INFO 2024-08-01 10:13:41,520 Account App Cache-1 AccountAppCache Start loading account to AccountContainer. Total number of account: 1}
INFO 2024-08-01 10:13:41,520 Account App Cache-1 AccountAppCache Start loading app to AccountAndAppContainer. Total number of accounts: 1
INFO 2024-08-01 10:13:41,520 Account App Cache-1 AccountAppCache Start loading account to AccountContainer. Total number of account: 0}
INFO 2024-08-01 10:13:41,520 Account App Cache-1 AccountAppCache Start loading app to AccountAndAppContainer. Total number of accounts: 0
INFO 2024-08-01 10:13:41,520 Account App Cache-1 AccountAppCache Start loading account to AccountContainer. Total number of account: 0}
INFO 2024-08-01 10:13:41,520 Account App Cache-1 AccountAppCache Start loading app to AccountAndAppContainer. Total number of accounts: 0
INFO 2024-08-01 10:13:41,520 Account App Cache-1 AccountAppCache Start loading account to AccountContainer. Total number of account: 0}
INFO 2024-08-01 10:13:41,520 Account App Cache-1 AccountAppCache Start loading app to AccountAndAppContainer. Total number of accounts: 0
INFO 2024-08-01 10:13:41,520 Account App Cache-1 AccountAppCache Start loading account to AccountContainer. Total number of account: 0}
WARN 2024-08-01 10:13:42,479 hz.awesome_ramanujan.cached.thread-1 MulticastService [172.18.0.1]:5702 [dev] [5.3.2] Sending multicast datagram failed. Exception message saying the operation is not permitted usually means the underlying OS is not able to send packets at a given pace. It can be caused by starting several hazelcast members in parallel when the members send their join message nearly at the same time.

Please look into the above error logs and kindly assist. I appreciate your quick response! Thanks
Hello, thank you very much for your reply. I am preparing for the Splunk Core Certified Power User exam. When I look at the syllabus, the first lesson in the third section is to recognize transactions, but the second lesson is: Group events using fields. I'm confused at this point, frankly, because when I asked AI platforms to explain the lesson, there was nothing about transaction; as you said, the stats command comes up instead. Is this correct then?
The count is just used in my dashboard view and will be removed from the initial query.
OK. Assuming there are no more typos in your examples, try something like this

<my_search_index>
| spath uri
| regex uri="\/vehicle\/orders\/v1(|.*\/processInsurance|\/.*\/validate|\/.*\/validateInsurance|\/.*\/process|\/([^-]+-){4}[^-]+)$"
| eval Operations=case(
    match(uri,"/vehicle/orders/v1/.*/processInsurance"),"processInsurance",
    match(uri,"/vehicle/orders/v1/.*/validateInsurance"),"validateInsurance",
    match(uri,"/vehicle/orders/v1/.*/validate"),"validateOrder",
    match(uri,"/vehicle/orders/v1/.*/process"),"processOrder",
    match(uri,"/vehicle/orders/v1/[^-]*-[^-]*-[^-]*-[^-]*-[^-]*"),"getOrder",
    match(uri,"/vehicle/orders/v1"),"createOrder")
| stats count as hits avg(request_time) as average perc90(request_time) as response90 by Operations
| eval average=round(average,2), response90=round(response90,2)
Good day, I am pretty new to Splunk and want a way to join two queries together.

Query 1 - Gives me all of my assets

| tstats count where index=_internal OR index=* BY host

Query 2 - Gives me all of my devices that ingest into the forwarder

index="_internal" source="*metrics.log*" group=tcpin_connections
| dedup hostname
| table date_hour, date_minute, date_mday, date_month, date_year, hostname, sourceIp, fwdType, guid, version, build, os, arch
| stats count

How can I join these to create a query that finds all my devices (query 1), checks whether they have the forwarder installed (query 2), and shows me the devices that are not in query 2?
Hello, thank you very much for your reply. I am preparing for the Splunk Core Certified Power User exam. When I look at the syllabus, the third section is as follows:

Section 3: Correlating Events
Lecture 1: Identify transactions
Lecture 2: Group events using fields
Lecture 3: Group events using fields and time
Lecture 4: Search with transactions
Lecture 5: Report on transactions
Lecture 6: Determine when to use transactions vs. stats

I looked at the identify transactions part and understood it, but when I asked AI tools to explain the second lesson, group events using fields, they describe the stats command and similar commands, as you said. They do not mention transaction. Is that right then?
I have usually found that the transaction command has limitations and quirks that sometimes lose information or give unexpected / invalid results. With Splunk, there are often multiple ways to solve a problem, and combinations of the stats command and its variants (eventstats and streamstats) usually work in a more predictable fashion. This does depend on your use case. If you could provide more detail on what you are trying to achieve, perhaps we could come up with a solution.
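As a hedged sketch of what that looks like in practice (the index and the sessionId/action field names here are hypothetical), a stats-based grouping that recovers the usual transaction outputs:

index=web sessionId=*
``` one row per session, with the span, event count and actions transaction would have produced ```
| stats earliest(_time) as start latest(_time) as end count as eventcount values(action) as actions by sessionId
| eval duration=end-start

Unlike transaction, this generally distributes the work across indexers and is not subject to transaction's memory-based eviction limits.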
Hello, how can I use transaction to group events using fields, and to group events using fields and time? I am new to Splunk and I am preparing for the Splunk Core Certified Power User exam. I would be very happy if there is a resource where I can get comprehensive information. Thank you!
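For illustration, a minimal sketch (the index and field names are hypothetical): transaction groups events that share the listed field values, and the maxspan / maxpause options add the time constraints:

index=web
``` group events sharing clientip and JSESSIONID, spanning at most 30m, with gaps of at most 5m ```
| transaction clientip JSESSIONID maxspan=30m maxpause=5m
| table _time clientip duration eventcount

Here duration and eventcount are fields that transaction itself adds to each grouped event.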
Hi @phanikumarcs, as I said, define your Use Cases, then you can create your searches. For example, you could create an alert for the queues:

index=_internal source=*metrics.log sourcetype=splunkd group=queue
| eval name=case(
    name=="aggqueue","2 - Aggregation Queue",
    name=="indexqueue", "4 - Indexing Queue",
    name=="parsingqueue", "1 - Parsing Queue",
    name=="typingqueue", "3 - Typing Queue",
    name=="splunktcpin", "0 - TCP In Queue",
    name=="tcpin_cooked_pqueue", "0 - TCP In Queue")
| eval max=if(isnotnull(max_size_kb),max_size_kb,max_size)
| eval curr=if(isnotnull(current_size_kb),current_size_kb,current_size)
| eval fill_perc=round((curr/max)*100,2)
| bin _time span=1m
| stats median(fill_perc) AS fill_percentage perc90(fill_perc) AS "90_perc" max(max) AS max max(curr) AS curr by host, _time, name
| where fill_percentage>70
| sort -_time

Then you could check the disk space, or whatever else you like. Anyway: define your Use Cases.

Ciao.
Giuseppe
Yes, but we have to set up an alert: for example, if any issues occur, it should trigger based on warnings or errors.
Hi @phanikumarcs, I'd simplify your search to look for possible errors:

index=_internal hostname="*hf*"

What do you want to monitor?

Ciao.
Giuseppe
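A minimal sketch of such an alert search, assuming splunkd warnings and errors are what matters (the hostname filter mirrors the search above):

index=_internal sourcetype=splunkd hostname="*hf*" (log_level=ERROR OR log_level=WARN)
``` summarise by forwarder, component and severity so the noisiest sources surface first ```
| stats count by host, component, log_level
| sort -count

Saved as an alert with a trigger condition of "number of results > 0", this fires whenever a heavy forwarder logs a warning or error in the chosen time window.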