All Topics

Hello All, how do I create a dependent dropdown based on a saved search? I am using a saved search to populate the input, but when I add a | search command it won't work. Please suggest. Thanks
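A minimal Simple XML sketch of the usual pattern, assuming a saved search named my_saved_search with fields category and item (all placeholder names). Note that | search cannot be the first command of a populating search, which is the most common reason this breaks, whereas | savedsearch can:

<input type="dropdown" token="category">
  <search>
    <query>| savedsearch my_saved_search | stats count by category</query>
  </search>
  <fieldForLabel>category</fieldForLabel>
  <fieldForValue>category</fieldForValue>
</input>
<!-- the second dropdown depends on the first via the $category$ token -->
<input type="dropdown" token="item">
  <search>
    <query>| savedsearch my_saved_search | search category="$category$" | stats count by item</query>
  </search>
  <fieldForLabel>item</fieldForLabel>
  <fieldForValue>item</fieldForValue>
</input>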
I created a lookup table for blacklisted DNS queries. I need a query that uses the lookup table to check whether any of the domains in it are present in events in my environment.
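One common pattern, sketched with assumed names (a lookup file blacklisted_dns.csv with a field domain, and DNS events carrying a field query):

index=dns sourcetype=dns:query
| lookup blacklisted_dns.csv domain AS query OUTPUT domain AS blacklisted_domain
| where isnotnull(blacklisted_domain)
| stats count BY query

An equivalent subsearch form is index=dns [| inputlookup blacklisted_dns.csv | rename domain AS query | fields query], which filters at search time instead of annotating results.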
Hello all, could you please help me with one question: is it possible to add a PNG image onto a rectangle? As an example, the rectangle is defined like this; is it possible to include an image in the corner of the rectangle? <a href=""> <g> <rect style=fill:color_grey width="150" height="90" x=1200 y=200/> </g> </a> Thank you for any help and answers.
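For illustration, SVG has an <image> element that can be layered over the rectangle; a minimal sketch where the href and the corner coordinates are placeholders (older SVG renderers may need xlink:href instead of href):

<g>
  <rect style="fill:grey" width="150" height="90" x="1200" y="200"/>
  <!-- small image pinned to the rectangle's top-left corner; logo.png is a placeholder -->
  <image href="logo.png" x="1200" y="200" width="30" height="30"/>
</g>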
I want to add three fields (insert, update, and error), subtract their total from count_carmen, and add the result as a new row.
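If I read the goal correctly, a sketch in SPL, taking the field names from the question and assuming the values sit on the existing results (appendpipe and the note label are my additions):

...
| eval total = insert + update + error
| appendpipe
    [| eval count_carmen = count_carmen - total
     | eval note = "count_carmen minus insert+update+error"
     | fields note count_carmen]

appendpipe runs its subpipeline over the current results and appends the output as extra rows, which is what "add a new row" usually calls for.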
Hi Experts, I would like to rename a sourcetype at index time with the config below.

props.conf:

[source::test/source.txt]
TRANSFORMS-sourcetype = newsourcetype

transforms.conf:

[newsourcetype]
SOURCE_KEY = MetaData:Sourcetype
REGEX = regex to match existing sourcetype
# when DEST_KEY is MetaData:Sourcetype, FORMAT needs the sourcetype:: prefix
FORMAT = sourcetype::newsourcetype
DEST_KEY = MetaData:Sourcetype

Now I would like to apply the settings below to the new sourcetype:

[newsourcetype]
TZ =
LINE_BREAKER =
TRUNCATE =
etc.

Will it work this way? Please let me know. Thanks. Ram
We have about 14k events in an event index, which is unstructured. I'm trying to ingest this data into a metric index at search time using the mcollect command, and I was able to convert the event logs to metrics. The Splunk docs state that a metric index is optimized for the storage and retrieval of metric data. While there is an improvement in search time, the storage size, instead of decreasing, drastically increased. How is storage optimized in the case of a metric index? Is there any additional configuration that needs to be set up? I have already set always_use_single_value_output to false for the mcollect command in limits.conf.
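For reference, the conversion pattern in question looks roughly like this (index, field, and metric names are placeholders):

index=my_events sourcetype=my_sourcetype
| eval metric_name = "response_time", _value = duration
| mcollect index=my_metrics

mcollect expects a metric_name and a numeric _value per result, and any remaining fields become dimensions; carrying many high-cardinality dimension fields through is one plausible reason the metric index ends up larger than the source events.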
Hi, I have set up an environment at home to learn. I have 2 instances: one serving as a Splunk forwarder where I have my data, and the other serving as deployment server + indexer + search head. I configured the serverclass and the app; however, I'm not getting data into the index from the forwarder, even though I checked the logs on the latter and the connection is successful. Is it because of the trial license? Any thoughts on why it is not working as expected? Any info would be appreciated. Thanks.
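For reference, the minimum plumbing on each side looks roughly like this (host names, ports, paths, and the index are placeholders to adapt):

outputs.conf on the forwarder:

[tcpout]
defaultGroup = primary

[tcpout:primary]
server = indexer.home.lab:9997

inputs.conf in the deployed app:

[monitor:///var/log/mydata]
index = main
sourcetype = my_sourcetype

The indexer also needs a receiving port ([splunktcp://9997] in its inputs.conf, or Settings > Forwarding and receiving), and the target index must already exist; a trial license does not block forwarding.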
Hello, how do I query a field in dbxquery that contains a colon? I ran the following query and got an error. Thank you

| dbxquery connection=visibility query="select abc:def from tableCompany"

org.postgresql.util.PSQLException: ERROR: syntax error at or near ":" Position:

I tried to put single quotes:

| dbxquery connection=visibility query="select 'abc:def' from tableCompany"

but it gave me the following result:

?column?
abc:def
abc:def
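In PostgreSQL, single quotes make a string literal (hence every row coming back as the literal abc:def), while identifiers containing special characters need double quotes. Since the dbxquery argument is itself double-quoted in SPL, the inner quotes have to be escaped with backslashes; a sketch that may work here:

| dbxquery connection=visibility query="select \"abc:def\" from tableCompany"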
I'm facing a rather peculiar issue with dashboards. When non-admin users, or users without the admin_all_objects capability, access the dashboard, all panels display "Waiting for data..." indefinitely. However, the strangest part is that if the user clicks on the search of a panel and is redirected to the search view, the results appear immediately.

Here's what I've tried so far:
- Searched through community questions and issues, but found nothing that matches this issue exactly.
- Experimented with different capabilities, but it seems only the admin_all_objects capability solves this issue.
- Attempted to adjust the job limits similar to those set for admin users.

Assigning the admin_all_objects capability to all users is not a viable solution for me due to security concerns. Has anyone encountered this issue before? I'm running out of ideas and would appreciate any help or insights on this. Note: tested also on a local instance deployed via ansible-role-for-splunk to reproduce. Thank you in advance for your time and assistance.
Hello, we are new to the Splunk environment and are using Enterprise v9.01. We have a complete driver package from CData that allows us to use 100+ different ODBC and JDBC drivers. I tried the Splunk DB Connect add-on and I can connect to a SQL DB. Can Splunk actually make connections to other JDBC/ODBC data sources from CData, such as MongoDB, Teams, OneNote, etc.? Please let us know.
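DB Connect is JDBC-based (it does not use ODBC), so in principle a CData JDBC driver can be registered as a custom connection type. A rough sketch of a db_connection_types.conf stanza; every value below is an assumption to be replaced from the CData driver's documentation:

[cdata_mongodb]
displayName = CData MongoDB
serviceClass = com.splunk.dbx2.DefaultDBX2JDBC
jdbcDriverClass = cdata.jdbc.mongodb.MongoDBDriver
jdbcUrlFormat = jdbc:mongodb:Server=<host>;Port=<port>;Database=<database>
port = 27017

The driver JAR would go into the DB Connect drivers directory, after which the new connection type should appear when creating a connection.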
Hello, we are from a software vendor's integration team and we would like to help our customers integrate our logs easily into their Splunk. So we developed a Python script, using your samples and our own code, to access our audit trail API. The script works well outside Splunk: it retrieves our logs as soon as there are new ones and forwards the JSON result to stdout. But as soon as we put it inside Splunk we get "ERROR ExecProcessor" errors which are not very self-explanatory:

08-30-2023 06:33:05.632 -0700 ERROR ExecProcessor [4316 ExecProcessor] - message from ...bin\scripts\Final-2.py"", line 57, in <module>
08-30-2023 06:33:05.632 -0700 ERROR ExecProcessor [4316 ExecProcessor] - message from ...bin\scripts\Final-2.py"" response = requests.get(url, headers={'Content-Type': 'application/json'}, cert=cert_context, verify = False)
08-30-2023 06:33:05.632 -0700 ERROR ExecProcessor [4316 ExecProcessor] - message from ...bin\scripts\Final-2.py"" File "C:\Program Files\Splunk\Python-3.7\lib\site-packages\requests\api.py", line 76, in get
08-30-2023 06:33:05.632 -0700 ERROR ExecProcessor [4316 ExecProcessor] - message from ...bin\scripts\Final-2.py"" return request('get', url, params=params, **kwargs)
08-30-2023 06:33:05.632 -0700 ERROR ExecProcessor [4316 ExecProcessor] - message from ...bin\scripts\Final-2.py"" File "C:\Program Files\Splunk\Python-3.7\lib\site-packages\requests\api.py", line 61, in request
08-30-2023 06:33:05.632 -0700 ERROR ExecProcessor [4316 ExecProcessor] - message from ...bin\scripts\Final-2.py"" return session.request(method=method, url=url, **kwargs)
08-30-2023 06:33:05.632 -0700 ERROR ExecProcessor [4316 ExecProcessor] - message from ...bin\scripts\Final-2.py"" File "C:\Program Files\Splunk\Python-3.7\lib\site-packages\requests\sessions.py", line 542, in request
08-30-2023 06:33:05.632 -0700 ERROR ExecProcessor [4316 ExecProcessor] - message from ...bin\scripts\Final-2.py"" resp = self.send(prep, **send_kwargs)
08-30-2023 06:33:05.632 -0700 ERROR ExecProcessor [4316 ExecProcessor] - message from ...bin\scripts\Final-2.py"" File "C:\Program Files\Splunk\Python-3.7\lib\site-packages\requests\sessions.py", line 655, in send
08-30-2023 06:33:05.632 -0700 ERROR ExecProcessor [4316 ExecProcessor] - message from ...bin\scripts\Final-2.py"" r = adapter.send(request, **kwargs)
08-30-2023 06:33:05.632 -0700 ERROR ExecProcessor [4316 ExecProcessor] - message from ...bin\scripts\Final-2.py"" File "C:\Program Files\Splunk\Python-3.7\lib\site-packages\requests\adapters.py", line 416, in send
08-30-2023 06:33:05.632 -0700 ERROR ExecProcessor [4316 ExecProcessor] - message from ...bin\scripts\Final-2.py"" self.cert_verify(conn, request.url, verify, cert)
08-30-2023 06:33:05.632 -0700 ERROR ExecProcessor [4316 ExecProcessor] - message from ...bin\scripts\Final-2.py"" File "C:\Program Files\Splunk\Python-3.7\lib\site-packages\requests\adapters.py", line 250, in cert_verify

It seems our script is refused at the line:

response = requests.get(url, headers={'Content-Type': 'application/json'}, cert=cert_context, verify=False)

We tried with and without verify=False, with no clue why it's refused. Do you have any idea why it gets stuck inside Splunk? (We tried on Linux and on Windows with the same result.) Best regards, TrustBuilder team
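The traceback stops inside requests' cert_verify, which raises when the client-certificate file passed via cert= cannot be found; scripts launched by ExecProcessor do not run from their own directory, so a relative cert path that works standalone often breaks under Splunk. A minimal sketch of resolving the path against the script's own location (URL and file names are placeholders):

import os
import requests

# Resolve the client certificate relative to this script, not the CWD:
# Splunk's ExecProcessor launches scripts from a different working directory.
BASE_DIR = os.path.dirname(os.path.abspath(__file__))
cert_context = (
    os.path.join(BASE_DIR, "client.pem"),  # placeholder cert file
    os.path.join(BASE_DIR, "client.key"),  # placeholder key file
)

response = requests.get(
    "https://audit.example.com/api/logs",  # placeholder URL
    headers={"Content-Type": "application/json"},
    cert=cert_context,
    timeout=30,
)
print(response.text)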
Hi All, for those who are familiar with AWS CloudTrail logs: these have details about every API call and every event that occurs in your AWS account. Is there an equivalent in Azure that can be ingested into Splunk? We have the "Splunk Add-on for Microsoft Cloud Services" installed in our environment. What input or config is required to pull in CloudTrail-equivalent logs? As of now, we are getting compute logs and Azure AD events via this add-on.
The primary data our organization needs to ingest from SNOW is in the form of reports created in SNOW. This add-on does not currently allow for that ingestion, which required our org to develop our own app; it recently broke due to a SNOW API update that broke pagination. Ingestion of incident and request data into Splunk is not very useful overall, as the need to correlate all the individual log events for a specific ticket makes searching or monitoring very difficult. Is there any plan to add SNOW report ingestion? This would be far more useful data to bring into Splunk.
Hello everyone, I am going crazy trying to figure out why this isn't working. I have a field called "alert.createdAt" that contains an epoch time (1693398386408). I need to convert this to be human readable (08/30/2023 09:26:47). However, when using strftime, I don't see anything being returned. My search is:

SEARCH
| eval c_time=strftime(alert.createdAt, "%m-%d-%Y %H:%M:%S")
| table c_time

I have been going through all of the previous solutions I could find, but I can't seem to get this to work. Is there another way to achieve this, or am I just way off on how I am trying to do this? 🙂 Thanks for any help, much appreciated. Tom
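Two details explain the empty result: a field name containing a dot has to be single-quoted inside eval (otherwise alert.createdAt is parsed as an expression), and 1693398386408 is epoch milliseconds, so it must be divided by 1000 before strftime. A corrected sketch:

SEARCH
| eval c_time=strftime('alert.createdAt'/1000, "%m/%d/%Y %H:%M:%S")
| table c_time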
I have a Dell EqualLogic Group Manager (SAN server) that hasn't been sending logs to syslog. I've added all the IPs for the server, and pinged and ran traceroute to them with no issues, yet logs are still not arriving. Does anyone have a solution? Thanks
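For reference, if the device points straight at a Splunk network input rather than a separate syslog server, the receiving side is roughly this (syslog typically uses UDP 514; the port, index, and sourcetype here are assumptions):

inputs.conf:

[udp://514]
index = network
sourcetype = dell:equallogic
connection_host = ip

Note that ping/traceroute succeeding only proves routing; it says nothing about whether the device is actually emitting syslog, so a packet capture on the receiver is the usual next check.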
Hello, we have a large multi-monitor screen at the front of the floor plate. We have our Splunk dashboards showing; however, we have different dashboards that periodically need to be shown. Is there a way to automatically cycle through each of these URLs or dashboards, viewing each individual dashboard in turn instead of switching manually? Many thanks
Hello, I've just updated my Splunk Security Essentials application from 3.7 to 3.7.1. After the update, the dashboard under Content > Manage Bookmarks shows an error (screenshot attached in the original post, not reproduced here). Has anyone seen that before? Thanks.
We have already enabled the Splunk logging driver, but this forwards logs from inside the containers. I want to capture the Docker system-level events, as you would see from this command:

docker events --filter event=stop --since '60m'

https://docs.docker.com/engine/reference/commandline/system_events/

I see this app (not approved for cloud). Are there any other options?

https://splunkbase.splunk.com/app/6113
https://github.com/quzen/docker_analyzer/blob/main/bin/docker_events.py
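One self-managed alternative is a small scripted input that shells out to the Docker CLI and writes one JSON object per line to stdout for Splunk to index. A minimal sketch, assuming the docker binary is on PATH and the input's interval matches the --since window (both assumptions):

#!/usr/bin/env python3
"""Scripted input sketch: emit recent Docker system events as JSON lines."""
import subprocess

# --format '{{json .}}' makes the docker CLI print one JSON object per event;
# --until bounds the window so the command returns instead of streaming forever.
cmd = [
    "docker", "events",
    "--since", "60m",
    "--until", "0s",
    "--format", "{{json .}}",
]
proc = subprocess.run(cmd, capture_output=True, text=True, check=True)
for line in proc.stdout.splitlines():
    print(line)  # Splunk indexes each stdout line as an event

A production version would checkpoint the timestamp of the last event seen so that overlapping runs don't index duplicates.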
Here is a sample of my data. I want events to break at each hour/min/sec value; since I have no date in the timestamp, I'm unable to make it work. I get the first few events to break, but then it goes back to breaking incorrectly.

07:05:00.140 [https-jsse-nio-8443-exec-17] INFO com.sabrix - [EVENT SUCCESS Anonymous:@unknown -> /ExampleApplication/com.sabrix] Failed to look up error 'USSG120','347','334' Probably OK to ignore.Single entity not found.
07:05:00.126 [https-jsse-nio-8443-exec-17] INFO com.sabrix - [EVENT SUCCESS Anonymous:@unknown -> /ExampleApplication/com.sabrix] Failed to look up error 'CUSTAUTHSST1','19021','334' Probably OK to ignore.Single entity not found.
07:05:00.096 [https-jsse-nio-8443-exec-17] INFO com.sabrix - [EVENT SUCCESS Anonymous:@unknown -> /ExampleApplication/com.sabrix] Failed to look up error 'USSG120','825','334' Probably OK to ignore.Single entity not found.
07:00:17.125 [https-jsse-nio-8443-exec-23] INFO com.sabrix - [EVENT SUCCESS Anonymous:@unknown -> /ExampleApplication/com.sabrix] [Message: NO_EXACT_MATCH_FOUND_FOR_GEOCODE - No Exact match was found for the specified geocode, Message: AV_SUCCESSFUL_PROCESSING - Successful processing.]
... 45 common frames omitted
at org.hibernate.engine.jdbc.internal.StatementPreparerImpl$StatementPreparationTemplate.prepareStatement(StatementPreparerImpl.java:176)
at org.hibernate.engine.jdbc.internal.StatementPreparerImpl$5.doPrepare(StatementPreparerImpl.java:149)
at org.jboss.jca.adapters.jdbc.WrappedConnection.prepareStatement(WrappedConnection.java:444)
at org.jboss.jca.adapters.jdbc.WrappedConnection.lock(WrappedConnection.java:164)
Caused by: java.sql.SQLException: IJ031040: Connection is not associated with a managed connection: org.jboss.jca.adapters.jdbc.jdk7.WrappedConnectionJDK7@50fdb291
... 29 common frames omitted
at org.hibernate.query.internal.AbstractProducedQuery.list(AbstractProducedQuery.java:1566)
at org.hibernate.query.internal.AbstractProducedQuery.doList(AbstractProducedQuery.java:1598)
at org.hibernate.internal.SessionImpl.list(SessionImpl.java:1526)
at org.hibernate.engine.query.spi.HQLQueryPlan.performList(HQLQueryPlan.java:220)
at org.hibernate.hql.internal.ast.QueryTranslatorImpl.list(QueryTranslatorImpl.java:395)
at org.hibernate.loader.hql.QueryLoader.list(QueryLoader.java:505)
at org.hibernate.loader.Loader.list(Loader.java:2599)
at org.hibernate.loader.Loader.listIgnoreQueryCache(Loader.java:2604)
at org.hibernate.loader.Loader.doList(Loader.java:2770)
at org.hibernate.loader.Loader.doList(Loader.java:2787)
at org.hibernate.loader.Loader.doQueryAndInitializeNonLazyCollections(Loader.java:351)
at org.hibernate.loader.Loader.doQuery(Loader.java:949)
at org.hibernate.loader.Loader.executeQueryStatement(Loader.java:1990)
at org.hibernate.loader.Loader.executeQueryStatement(Loader.java:2012)
at org.hibernate.loader.Loader.prepareQueryStatement(Loader.java:2082)
at org.hibernate.engine.jdbc.internal.StatementPreparerImpl.prepareQueryStatement(StatementPreparerImpl.java:151)
at org.hibernate.engine.jdbc.internal.StatementPreparerImpl$StatementPreparationTemplate.prepareStatement(StatementPreparerImpl.java:186)
at org.hibernate.engine.jdbc.spi.SqlExceptionHelper.convert(SqlExceptionHelper.java:113)
at org.hibernate.exception.internal.StandardSQLExceptionConverter.convert(StandardSQLExceptionConverter.java:47)
Caused by: org.hibernate.exception.GenericJDBCException: could not prepare statement
... 20 common frames omitted
at com.thomsonreuters.persistence.helper.SabrixInternalExceptionDelegate.doExecute(SabrixInternalExceptionDelegate.java:87)
at com.thomsonreuters.persistence.AbstractSabrixBaseRepository$6.call(AbstractSabrixBaseRepository.java:394)
at com.thomsonreuters.persistence.AbstractSabrixBaseRepository$6.call(AbstractSabrixBaseRepository.java:398)
at com.thomsonreuters.persistence.AbstractBaseRepository.findOneMatching(AbstractBaseRepository.java:753)
at com.thomsonreuters.persistence.AbstractBaseRepository.findOne(AbstractBaseRepository.java:428)
at org.springframework.data.jpa.repository.support.QuerydslJpaRepository.findOne(QuerydslJpaRepository.java:106)
at com.querydsl.jpa.impl.AbstractJPAQuery.fetchOne(AbstractJPAQuery.java:253)
at com.querydsl.jpa.impl.AbstractJPAQuery.getSingleResult(AbstractJPAQuery.java:183)
at org.hibernate.query.internal.AbstractProducedQuery.getSingleResult(AbstractProducedQuery.java:1614)
at org.hibernate.query.internal.AbstractProducedQuery.list(AbstractProducedQuery.java:1575)
at org.hibernate.internal.ExceptionConverterImpl.convert(ExceptionConverterImpl.java:154)
Caused by: javax.persistence.PersistenceException: org.hibernate.exception.GenericJDBCException: could not prepare statement
at java.util.TimerThread.run(Timer.java:505)
at java.util.TimerThread.mainLoop(Timer.java:555)
at com.sabrix.scheduler.ScheduledTask.run(ScheduledTask.java:84)
at com.sabrix.te.autocontentdownload.task.AutoContentDownloadTask.runTask(AutoContentDownloadTask.java:529)
at com.sabrix.te.autocontentdownload.task.AutoContentDownloadTask.runTaskAndNotify(AutoContentDownloadTask.java:545)
at com.sabrix.te.autocontentdownload.task.AutoContentDownloadTask.runTask(AutoContentDownloadTask.java:693)
at com.sabrix.te.autocontentdownload.task.AutoContentDownloadTask.isTimeToRun(AutoContentDownloadTask.java:778)
at com.sun.proxy.$Proxy135.getFrequency(Unknown Source)
at org.springframework.aop.framework.JdkDynamicAopProxy.invoke(JdkDynamicAopProxy.java:205)
at org.springframework.aop.support.AopUtils.invokeJoinpointUsingReflection(AopUtils.java:344)
at java.lang.reflect.Method.invoke(Method.java:498)
at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at sun.reflect.GeneratedMethodAccessor857.invoke(Unknown Source)
at com.sabrix.te.autocontentdownload.configuration.DefaultAutoContentDownloadSubsystemConfiguration.getFrequency(DefaultAutoContentDownloadSubsystemConfiguration.java:290)
at com.sabrix.scheduler.SDIConfiguration.getAutoFrequency(SDIConfiguration.java:149)
at com.sabrix.scheduler.SDIConfiguration.getAutoFreqConfig(SDIConfiguration.java:359)
at com.sabrix.scheduler.SDIConfiguration.getConfig(SDIConfiguration.java:525)
at com.thomsonreuters.persistence.taxentity.ConfigDao.findConfigByName(ConfigDao.java:69)
at com.thomsonreuters.persistence.AbstractSabrixBaseRepository.findEntityByEntityKeyWithFinderException(AbstractSabrixBaseRepository.java:393)
at com.thomsonreuters.persistence.helper.SabrixFinderExceptionDelegate.doExecute(SabrixFinderExceptionDelegate.java:65)
at com.thomsonreuters.persistence.helper.SabrixInternalExceptionDelegate.doExecute(SabrixInternalExceptionDelegate.java:91)
at com.thomsonreuters.persistence.helper.SabrixInternalExceptionDelegate.processException(SabrixInternalExceptionDelegate.java:135)
com.sabrix.error.SabrixInternalException: Could not execute AbstractBaseRepository.findEntityByEntityKeyWithFinderException().
06:57:48.030 [Timer-1] ERROR c.s.t.a.c.DefaultAutoContentDownloadSubsystemConfiguration - [EVENT FAILURE Anonymous:@unknown -> /ExampleApplication/com.sabrix.te.autocontentdownload.configuration.DefaultAutoContentDownloadSubsystemConfiguration] An error occurred whilst retrieving the auto content download frequency setting. Using default of "12" hours.
at java.util.TimerThread.run(Timer.java:505)
at java.util.TimerThread.mainLoop(Timer.java:555)
at com.sabrix.scheduler.ScheduledTask.run(ScheduledTask.java:84)
at com.sabrix.te.autocontentdownload.task.AutoContentDownloadTask.runTask(AutoContentDownloadTask.java:529)
at com.sabrix.te.autocontentdownload.task.AutoContentDownloadTask.runTaskAndNotify(AutoContentDownloadTask.java:543)
at com.sabrix.te.autocontentdownload.task.AutoContentDownloadTask.removeMessagesForChannel(AutoContentDownloadTask.java:585)
at com.sun.proxy.$Proxy143.removeMessages(Unknown Source)
at org.springframework.aop.framework.JdkDynamicAopProxy.invoke(JdkDynamicAopProxy.java:212)
at org.springframework.aop.framework.ReflectiveMethodInvocation.proceed(ReflectiveMethodInvocation.java:186)
at org.springframework.transaction.interceptor.TransactionInterceptor.invoke(TransactionInterceptor.java:118)
at org.springframework.transaction.interceptor.TransactionAspectSupport.invokeWithinTransaction(TransactionAspectSupport.java:367)
at org.springframework.aop.framework.ReflectiveMethodInvocation.proceed(ReflectiveMethodInvocation.java:163)
at org.springframework.aop.framework.ReflectiveMethodInvocation.invokeJoinpoint(ReflectiveMethodInvocation.java:198)
at org.springframework.aop.support.AopUtils.invokeJoinpointUsingReflection(AopUtils.java:344)
at java.lang.reflect.Method.invoke(Method.java:498)
at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at sun.reflect.GeneratedMethodAccessor856.invoke(Unknown Source)
at com.sabrix.te.messaging.management.NotificationMessageRemover.removeMessages(NotificationMessageRemover.java:88)
at com.thomsonreuters.persistence.taxentity.NotificationMessageDao.findMessages(NotificationMessageDao.java:99)
com.sabrix.messaging.management.MessageFinderRuntimeException: Could not execute AbstractBaseRepository.findEntitiesByEntityKey(Predicate).
06:57:48.030 [Timer-1] ERROR c.s.t.a.task.AutoContentDownloadTask - [EVENT FAILURE Anonymous:@unknown -> /ExampleApplication/com.sabrix.te.autocontentdownload.task.AutoContentDownloadTask] An exception occurred during removal of messages on channel with id 200.
06:49:26.496 [https-jsse-nio-8443-exec-4] INFO com.sabrix - [EVENT SUCCESS Anonymous:@unknown -> /ExampleApplication/com.sabrix] Failed to look up error 'USSG148','10222','334' Probably OK to ignore.Single entity not found.
06:49:26.488 [https-jsse-nio-8443-exec-4] INFO com.sabrix - [EVENT SUCCESS Anonymous:@unknown -> /ExampleApplication/com.sabrix] Failed to look up error 'USSG192','7983','334' Probably OK to ignore.Single entity not found.
06:49:26.446 [https-jsse-nio-8443-exec-2] INFO com.sabrix - [EVENT SUCCESS Anonymous:@unknown -> /ExampleApplication/com.sabrix] Failed to look up error 'USSG148','10222','334' Probably OK to ignore.Single entity not found.
06:49:26.437 [https-jsse-nio-8443-exec-2] INFO com.sabrix - [EVENT SUCCESS Anonymous:@unknown -> /ExampleApplication/com.sabrix] Failed to look up error 'USSG192','7983','334' Probably OK to ignore.Single entity not found.
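A common fix for time-only timestamps like these is to break events only where a new line starts with HH:MM:SS.mmm, so the stack-trace lines stay attached to the preceding event, and to declare the time format explicitly. A props.conf sketch for the indexer or heavy forwarder (the sourcetype name is a placeholder):

[sabrix:applog]
SHOULD_LINEMERGE = false
LINE_BREAKER = ([\r\n]+)(?=\d{2}:\d{2}:\d{2}\.\d{3}\s)
TIME_PREFIX = ^
TIME_FORMAT = %H:%M:%S.%3N
MAX_TIMESTAMP_LOOKAHEAD = 13
TRUNCATE = 100000

With no date in the data, Splunk fills in the date from context (typically the file's modification time or the current date), which is worth keeping in mind for events that cross midnight.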
Hi Guys, I am trying to configure Splunk to send me alerts on mobile when the requests against my web server exceed a specified value. I ran the search and it shows me the request count and source IP, but the alert I created is not triggered at all (I viewed the Triggered Alerts menu and it's empty). It is scheduled hourly, with "number of results greater than 0", and the Splunk Secure Gateway action selected. My goal is to send these events to my mobile and to SOAR when they exceed a value, and to configure a playbook to automatically block the src_ip, as it is most likely performing a DoS attack. Can anybody help me?

host=192.168.1.1 "DST=192.168.1.174"
| stats count(SRC) AS Requests BY SRC
| sort - Requests
| where Requests > 50
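For reference, those trigger conditions correspond roughly to this savedsearches.conf stanza (the dispatch window is an assumption; a mismatch between the schedule and the dispatch window, e.g. an hourly search that only looks back a few minutes, is a common reason an alert never fires):

[Web DoS alert]
search = host=192.168.1.1 "DST=192.168.1.174" | stats count(SRC) AS Requests BY SRC | sort - Requests | where Requests > 50
enableSched = 1
cron_schedule = 0 * * * *
dispatch.earliest_time = -1h
dispatch.latest_time = now
counttype = number of events
relation = greater than
quantity = 0

Checking the alert's run history under Activity > Jobs shows whether the search is executing and returning zero results, versus not running at all.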