All Posts



Hello, we are from a software vendor's integration team, and we would like to help our customers integrate our logs into their Splunk easily. So we developed a Python script, using your samples and our own Python code, to access our Audit Trail API. The script works well outside Splunk: it retrieves our logs as soon as there are new entries and forwards the JSON result to stdout. But as soon as we put it inside Splunk, we get "ERROR ExecProcessor" errors that are not very self-explanatory:

08-30-2023 06:33:05.632 -0700 ERROR ExecProcessor [4316 ExecProcessor] - message from ... ...bin\scripts\Final-2.py"", line 57, in <module>
... response = requests.get(url, headers={'Content-Type': 'application/json'}, cert=cert_context, verify = False)
... File "C:\Program Files\Splunk\Python-3.7\lib\site-packages\requests\api.py", line 76, in get
... return request('get', url, params=params, **kwargs)
... File "C:\Program Files\Splunk\Python-3.7\lib\site-packages\requests\api.py", line 61, in request
... return session.request(method=method, url=url, **kwargs)
... File "C:\Program Files\Splunk\Python-3.7\lib\site-packages\requests\sessions.py", line 542, in request
... resp = self.send(prep, **send_kwargs)
... File "C:\Program Files\Splunk\Python-3.7\lib\site-packages\requests\sessions.py", line 655, in send
... r = adapter.send(request, **kwargs)
... File "C:\Program Files\Splunk\Python-3.7\lib\site-packages\requests\adapters.py", line 416, in send
... self.cert_verify(conn, request.url, verify, cert)
... File "C:\Program Files\Splunk\Python-3.7\lib\site-packages\requests\adapters.py", line 250, in cert_verify

It seems our script is refused at the line:

response = requests.get(url, headers={'Content-Type': 'application/json'}, cert=cert_context, verify = False)

We tried with and without verify = False, with no clue why it is refused. Do you have any ideas about why it gets stuck inside Splunk? (We tried on Linux and on Windows with the same result.)

Best regards,
The TrustBuilder team
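The traceback stops inside cert_verify, which is where requests checks that the client cert file actually exists. One common cause of this (an assumption, not a diagnosis — the actual exception text is cut off above) is that Splunk does not launch scripted inputs from the script's own directory, so a relative cert path that works on the command line fails under Splunk. A minimal sketch of a workaround; the file name "client.pem" is hypothetical:

```python
# Resolve the cert path relative to the script itself, not the working
# directory, and surface failures explicitly instead of a bare traceback.
import os
import sys

def resolve_cert(filename, base=None):
    """Build an absolute path to a cert file shipped next to this script."""
    if base is None:
        base = os.path.dirname(os.path.abspath(__file__))
    path = os.path.join(base, filename)
    if not os.path.isfile(path):
        # stderr is what ends up in splunkd.log as ExecProcessor messages,
        # so an explicit message beats an opaque requests traceback.
        print("cert file not found: %s" % path, file=sys.stderr)
    return path
```

Under this assumption, cert_context would be built as (resolve_cert("client.pem"), resolve_cert("client.key")) before the requests.get call; wrapping the call in try/except and printing the exception to stderr likewise makes the ExecProcessor lines readable.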
Field names with special characters such as dots (.) need to be referenced in single quotes, plus it looks like your time value is in milliseconds, not the seconds used by epoch time. Try this:

| makeresults
| fields - _time
| eval alert.createdAt=1693398386408
| eval c_time=strftime('alert.createdAt'/1000,"%m-%d-%Y %H:%M:%S.%3N")
| table c_time
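The milliseconds-vs-seconds distinction can be checked outside Splunk too; a quick Python sketch of the same conversion (rendered in UTC here — Splunk's strftime uses the search head's local time zone, which is why the exact clock time may differ):

```python
# 1693398386408 is a *millisecond* epoch; strftime-style conversion
# expects seconds, hence the division by 1000 in the SPL above.
from datetime import datetime, timezone

def epoch_ms_to_str(ms):
    """Render a millisecond epoch as mm-dd-YYYY HH:MM:SS in UTC."""
    return datetime.fromtimestamp(ms / 1000, tz=timezone.utc).strftime(
        "%m-%d-%Y %H:%M:%S")

# epoch_ms_to_str(1693398386408) -> "08-30-2023 12:26:26"
```

Passing the raw millisecond value without the /1000 would yield a date tens of thousands of years in the future, which is why the original search appeared to return nothing sensible.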
Solution here: https://community.splunk.com/t5/Knowledge-Management/Kvstore-Status-failed/m-p/656113#M9664
Feels like I tried every suggestion here, from renaming the cert, generating my own, messing around with the windows cert store and the .conf files... And this solution finally worked! Thank you so much!!
Try like this

<init>
  <set token="input">!@#$%^&amp;*(){}|\";:&lt;&gt;/\\[]</set>
</init>
<row>
  <panel depends="$alwayshide$">
    <html>
      <style>
        #escaped table tbody td div.multivalue-subcell[data-mv-index="1"] { display: none; }
      </style>
    </html>
  </panel>
  <panel id="escaped">
    <table>
      <title>$escaped$</title>
      <search>
        <query>| makeresults
| fields - _time
| eval param=$input|s$
| eval param=mvappend(param,replace(param,"([!@#$%^&amp;*\(\)\{\}\|\";:&lt;&gt;\/\\\[\]])","\\\\\1"))</query>
        <earliest>-24h@h</earliest>
        <latest>now</latest>
      </search>
      <option name="drilldown">cell</option>
      <option name="refresh.display">progressbar</option>
      <drilldown>
        <eval token="escaped">mvindex($click.value$,1)</eval>
      </drilldown>
    </table>
  </panel>
</row>
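The backslash-escaping that the replace() above performs can be prototyped in plain Python — a rough sketch, with the character class mirroring the one in the SPL:

```python
# Prefix each special character with a backslash, as the SPL replace() does.
import re

SPECIALS = re.compile(r'([!@#$%^&*(){}|";:<>/\\\[\]])')

def escape_specials(text):
    """Return text with every listed special character backslash-escaped."""
    return SPECIALS.sub(r"\\\1", text)
```

This makes it easy to verify the expected output for tricky inputs before wrestling with the extra layer of quoting that dashboard tokens add.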
Hi All, for those who are familiar with AWS CloudTrail logs: these have details about every API call and every event that occurs in your AWS account. Is there an equivalent in Azure that can be ingested into Splunk? We have the "Splunk Add-on for Microsoft Cloud Services" installed in our environment. What input or config is required to pull in CloudTrail-equivalent logs? As of now, we are getting compute logs and Azure AD events via this add-on.
Hello, is this solved? Support can help you. This may increase load on your indexers.  
The primary data our organization needs to ingest from SNOW is in the form of reports created in SNOW. This add-on does not currently allow for that, which required our org to develop our own app, and that app recently broke because a SNOW API update broke pagination. Ingestion of incident and request data into Splunk is not very useful overall, as the need to correlate all the individual log events for specific tickets makes searching and monitoring very difficult. Is there any plan to add SNOW report ingestion? This would be far more useful data to bring into Splunk.
Hello everyone, I am going crazy trying to figure out why this isn't working. I have a field called "alert.createdAt" that contains an epoch time (1693398386408). I need to convert this to be human readable (08/30/2023 09:26:47). However, when using strftime, I don't see anything being returned. My search is:

SEARCH | eval c_time=strftime(alert.createdAt,"%m-%d-%Y %H:%M:%S") | table c_time

I have been going through all of the previous solutions I could find, but I can't seem to get this to work. Is there another way to achieve this, or am I just way off on how I am trying to do this? : ) Thanks for any help, much appreciated. Tom
Hello, thank you for all of the help.   I think I'm all set now.  
I have a Dell EqualLogic Group Manager (SAN server) that hasn't been sending logs to syslog. I've added all the IPs for the server, and pinged and ran traceroute to them with no issues, yet logs are still not being sent. Anyone have a solution? Thanks
Thanks @ITWhisperer. This is exactly what I'm looking for, but my text comes from a token. If I do something like below, I get "Error in 'SearchParser': Mismatched ']'."

<init>
  <eval token="input">"!@#$%^&amp;*(){}|\";:&lt;&gt;/\\[]"</eval>
</init>
<row>
  <panel depends="$alwayshide$">
    <html>
      <style>
        #escaped table tbody td div.multivalue-subcell[data-mv-index="1"] { display: none; }
      </style>
    </html>
  </panel>
  <panel id="escaped">
    <table>
      <title>$escaped$</title>
      <search>
        <query>| makeresults
| fields - _time
| eval param="$input$"
| eval param=mvappend(param,replace(param,"([!@#$%^&amp;*\(\)\{\}\|\";:&lt;&gt;\/\\\[\]])","\\\\\1"))</query>
        <earliest>-24h@h</earliest>
        <latest>now</latest>
      </search>
      <option name="drilldown">cell</option>
      <option name="refresh.display">progressbar</option>
      <drilldown>
        <eval token="escaped">mvindex($click.value$,1)</eval>
      </drilldown>
    </table>
  </panel>
</row>
SmartStore has no effect on your license.  License use is determined by the amount of data written to disk, regardless of the storage method used.
index=main source="*eligible*" "/api/info/eligible"
    [| makeresults
    | addinfo
    | eval row=mvrange(0,3)
    | mvexpand row
    | eval row=if(row=2,7,row)
    | eval earliest=relative_time(info_min_time,"-".row."d")
    | eval latest=relative_time(info_max_time,"-".row."d")
    | table earliest latest]
| bin _time span=1d
| stats count by _time
| addinfo
| eval day=case(_time>=relative_time(info_max_time,"-1d"),"Today",_time>=relative_time(info_max_time,"-2d"),"Yesterday",true(),"LastWeek")
| eval {day}=count
| fields - count _time info_* day
| stats values(*) as *
| eval dailychange=100*Today/Yesterday
| eval weeklychange=100*Today/LastWeek
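The tail of that search reduces to simple ratio arithmetic; a sketch of what the final two evals compute once the Today/Yesterday/LastWeek counts are in hand:

```python
# Mirror of the final evals above: today's count expressed as a
# percentage of yesterday's count and of the same day last week.
def changes(counts):
    """counts maps 'Today', 'Yesterday' and 'LastWeek' to event counts."""
    return {
        "dailychange": 100 * counts["Today"] / counts["Yesterday"],
        "weeklychange": 100 * counts["Today"] / counts["LastWeek"],
    }
```

Values above 100 mean today's volume is higher than the baseline; e.g. 120 requests today against 100 yesterday gives a dailychange of 120.0.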
Hello, we have a large multi-monitor screen at the front of the floor plate. We have our Splunk dashboards showing; however, we have different dashboards that periodically need to be shown. Is there a way to automatically cycle through each of these URLs or dashboards to view each individual dashboard instead of doing it manually? Many thanks
Hello, I've just updated my Splunk Security Essentials application from 3.7 to 3.7.1. After the update, the dashboard in Content > Manage bookmarks shows the following error: [screenshot not reproduced here]. Anyone seen that before? Thanks.
We have already enabled the Splunk logging driver, but this forwards logs from inside the containers. I want to capture the Docker system-level events, as you would see from this command:

docker events --filter event=stop --since '60m'

https://docs.docker.com/engine/reference/commandline/system_events/

I see this app (not approved for cloud). Are there any other options?
https://splunkbase.splunk.com/app/6113
https://github.com/quzen/docker_analyzer/blob/main/bin/docker_events.py
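In the absence of a cloud-approved app, one common pattern (a sketch of a possible scripted input, not a vetted add-on) is to run `docker events` with JSON output and echo each event to stdout, which Splunk then indexes:

```python
# Stream docker system-level events as one JSON object per line.
# A Splunk scripted input indexes whatever the script prints to stdout.
import json
import subprocess
import sys

def parse_event(line):
    """Parse one `docker events --format '{{json .}}'` line into a dict."""
    return json.loads(line)

def stream_events(since="60m"):
    cmd = ["docker", "events", "--since", since, "--format", "{{json .}}"]
    proc = subprocess.Popen(cmd, stdout=subprocess.PIPE, text=True)
    for line in proc.stdout:
        try:
            # Re-serialize so each indexed event is guaranteed valid JSON.
            print(json.dumps(parse_event(line)), flush=True)
        except ValueError:
            print("unparseable event line", file=sys.stderr)
```

Filters such as --filter event=stop can be added to the command list exactly as in the CLI example above; scheduling and checkpointing (so restarts don't re-ingest old events) would still need to be handled by the input's configuration.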
Here is a sample of my data. I want to break events on each hour/min/sec timestamp; since I have no date in the timestamp, I'm unable to make it work. I get the first few events to break correctly, but then it goes back to breaking incorrectly.

07:05:00.140 [https-jsse-nio-8443-exec-17] INFO com.sabrix - [EVENT SUCCESS Anonymous:@unknown -> /ExampleApplication/com.sabrix] Failed to look up error 'USSG120','347','334' Probably OK to ignore.Single entity not found.
07:05:00.126 [https-jsse-nio-8443-exec-17] INFO com.sabrix - [EVENT SUCCESS Anonymous:@unknown -> /ExampleApplication/com.sabrix] Failed to look up error 'CUSTAUTHSST1','19021','334' Probably OK to ignore.Single entity not found.
07:05:00.096 [https-jsse-nio-8443-exec-17] INFO com.sabrix - [EVENT SUCCESS Anonymous:@unknown -> /ExampleApplication/com.sabrix] Failed to look up error 'USSG120','825','334' Probably OK to ignore.Single entity not found.
07:00:17.125 [https-jsse-nio-8443-exec-23] INFO com.sabrix - [EVENT SUCCESS Anonymous:@unknown -> /ExampleApplication/com.sabrix] [Message: NO_EXACT_MATCH_FOUND_FOR_GEOCODE - No Exact match was found for the specified geocode, Message: AV_SUCCESSFUL_PROCESSING - Successful processing.]
... 45 common frames omitted
at org.hibernate.engine.jdbc.internal.StatementPreparerImpl$StatementPreparationTemplate.prepareStatement(StatementPreparerImpl.java:176)
at org.hibernate.engine.jdbc.internal.StatementPreparerImpl$5.doPrepare(StatementPreparerImpl.java:149)
at org.jboss.jca.adapters.jdbc.WrappedConnection.prepareStatement(WrappedConnection.java:444)
at org.jboss.jca.adapters.jdbc.WrappedConnection.lock(WrappedConnection.java:164)
Caused by: java.sql.SQLException: IJ031040: Connection is not associated with a managed connection: org.jboss.jca.adapters.jdbc.jdk7.WrappedConnectionJDK7@50fdb291
... 29 common frames omitted
at org.hibernate.query.internal.AbstractProducedQuery.list(AbstractProducedQuery.java:1566)
at org.hibernate.query.internal.AbstractProducedQuery.doList(AbstractProducedQuery.java:1598)
at org.hibernate.internal.SessionImpl.list(SessionImpl.java:1526)
at org.hibernate.engine.query.spi.HQLQueryPlan.performList(HQLQueryPlan.java:220)
at org.hibernate.hql.internal.ast.QueryTranslatorImpl.list(QueryTranslatorImpl.java:395)
at org.hibernate.loader.hql.QueryLoader.list(QueryLoader.java:505)
at org.hibernate.loader.Loader.list(Loader.java:2599)
at org.hibernate.loader.Loader.listIgnoreQueryCache(Loader.java:2604)
at org.hibernate.loader.Loader.doList(Loader.java:2770)
at org.hibernate.loader.Loader.doList(Loader.java:2787)
at org.hibernate.loader.Loader.doQueryAndInitializeNonLazyCollections(Loader.java:351)
at org.hibernate.loader.Loader.doQuery(Loader.java:949)
at org.hibernate.loader.Loader.executeQueryStatement(Loader.java:1990)
at org.hibernate.loader.Loader.executeQueryStatement(Loader.java:2012)
at org.hibernate.loader.Loader.prepareQueryStatement(Loader.java:2082)
at org.hibernate.engine.jdbc.internal.StatementPreparerImpl.prepareQueryStatement(StatementPreparerImpl.java:151)
at org.hibernate.engine.jdbc.internal.StatementPreparerImpl$StatementPreparationTemplate.prepareStatement(StatementPreparerImpl.java:186)
at org.hibernate.engine.jdbc.spi.SqlExceptionHelper.convert(SqlExceptionHelper.java:113)
at org.hibernate.exception.internal.StandardSQLExceptionConverter.convert(StandardSQLExceptionConverter.java:47)
Caused by: org.hibernate.exception.GenericJDBCException: could not prepare statement
... 20 common frames omitted
at com.thomsonreuters.persistence.helper.SabrixInternalExceptionDelegate.doExecute(SabrixInternalExceptionDelegate.java:87)
at com.thomsonreuters.persistence.AbstractSabrixBaseRepository$6.call(AbstractSabrixBaseRepository.java:394)
at com.thomsonreuters.persistence.AbstractSabrixBaseRepository$6.call(AbstractSabrixBaseRepository.java:398)
at com.thomsonreuters.persistence.AbstractBaseRepository.findOneMatching(AbstractBaseRepository.java:753)
at com.thomsonreuters.persistence.AbstractBaseRepository.findOne(AbstractBaseRepository.java:428)
at org.springframework.data.jpa.repository.support.QuerydslJpaRepository.findOne(QuerydslJpaRepository.java:106)
at com.querydsl.jpa.impl.AbstractJPAQuery.fetchOne(AbstractJPAQuery.java:253)
at com.querydsl.jpa.impl.AbstractJPAQuery.getSingleResult(AbstractJPAQuery.java:183)
at org.hibernate.query.internal.AbstractProducedQuery.getSingleResult(AbstractProducedQuery.java:1614)
at org.hibernate.query.internal.AbstractProducedQuery.list(AbstractProducedQuery.java:1575)
at org.hibernate.internal.ExceptionConverterImpl.convert(ExceptionConverterImpl.java:154)
Caused by: javax.persistence.PersistenceException: org.hibernate.exception.GenericJDBCException: could not prepare statement
at java.util.TimerThread.run(Timer.java:505)
at java.util.TimerThread.mainLoop(Timer.java:555)
at com.sabrix.scheduler.ScheduledTask.run(ScheduledTask.java:84)
at com.sabrix.te.autocontentdownload.task.AutoContentDownloadTask.runTask(AutoContentDownloadTask.java:529)
at com.sabrix.te.autocontentdownload.task.AutoContentDownloadTask.runTaskAndNotify(AutoContentDownloadTask.java:545)
at com.sabrix.te.autocontentdownload.task.AutoContentDownloadTask.runTask(AutoContentDownloadTask.java:693)
at com.sabrix.te.autocontentdownload.task.AutoContentDownloadTask.isTimeToRun(AutoContentDownloadTask.java:778)
at com.sun.proxy.$Proxy135.getFrequency(Unknown Source)
at org.springframework.aop.framework.JdkDynamicAopProxy.invoke(JdkDynamicAopProxy.java:205)
at org.springframework.aop.support.AopUtils.invokeJoinpointUsingReflection(AopUtils.java:344)
at java.lang.reflect.Method.invoke(Method.java:498)
at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at sun.reflect.GeneratedMethodAccessor857.invoke(Unknown Source)
at com.sabrix.te.autocontentdownload.configuration.DefaultAutoContentDownloadSubsystemConfiguration.getFrequency(DefaultAutoContentDownloadSubsystemConfiguration.java:290)
at com.sabrix.scheduler.SDIConfiguration.getAutoFrequency(SDIConfiguration.java:149)
at com.sabrix.scheduler.SDIConfiguration.getAutoFreqConfig(SDIConfiguration.java:359)
at com.sabrix.scheduler.SDIConfiguration.getConfig(SDIConfiguration.java:525)
at com.thomsonreuters.persistence.taxentity.ConfigDao.findConfigByName(ConfigDao.java:69)
at com.thomsonreuters.persistence.AbstractSabrixBaseRepository.findEntityByEntityKeyWithFinderException(AbstractSabrixBaseRepository.java:393)
at com.thomsonreuters.persistence.helper.SabrixFinderExceptionDelegate.doExecute(SabrixFinderExceptionDelegate.java:65)
at com.thomsonreuters.persistence.helper.SabrixInternalExceptionDelegate.doExecute(SabrixInternalExceptionDelegate.java:91)
at com.thomsonreuters.persistence.helper.SabrixInternalExceptionDelegate.processException(SabrixInternalExceptionDelegate.java:135)
com.sabrix.error.SabrixInternalException: Could not execute AbstractBaseRepository.findEntityByEntityKeyWithFinderException().
06:57:48.030 [Timer-1] ERROR c.s.t.a.c.DefaultAutoContentDownloadSubsystemConfiguration - [EVENT FAILURE Anonymous:@unknown -> /ExampleApplication/com.sabrix.te.autocontentdownload.configuration.DefaultAutoContentDownloadSubsystemConfiguration] An error occurred whilst retrieving the auto content download frequency setting. Using default of "12" hours.
at java.util.TimerThread.run(Timer.java:505)
at java.util.TimerThread.mainLoop(Timer.java:555)
at com.sabrix.scheduler.ScheduledTask.run(ScheduledTask.java:84)
at com.sabrix.te.autocontentdownload.task.AutoContentDownloadTask.runTask(AutoContentDownloadTask.java:529)
at com.sabrix.te.autocontentdownload.task.AutoContentDownloadTask.runTaskAndNotify(AutoContentDownloadTask.java:543)
at com.sabrix.te.autocontentdownload.task.AutoContentDownloadTask.removeMessagesForChannel(AutoContentDownloadTask.java:585)
at com.sun.proxy.$Proxy143.removeMessages(Unknown Source)
at org.springframework.aop.framework.JdkDynamicAopProxy.invoke(JdkDynamicAopProxy.java:212)
at org.springframework.aop.framework.ReflectiveMethodInvocation.proceed(ReflectiveMethodInvocation.java:186)
at org.springframework.transaction.interceptor.TransactionInterceptor.invoke(TransactionInterceptor.java:118)
at org.springframework.transaction.interceptor.TransactionAspectSupport.invokeWithinTransaction(TransactionAspectSupport.java:367)
at org.springframework.aop.framework.ReflectiveMethodInvocation.proceed(ReflectiveMethodInvocation.java:163)
at org.springframework.aop.framework.ReflectiveMethodInvocation.invokeJoinpoint(ReflectiveMethodInvocation.java:198)
at org.springframework.aop.support.AopUtils.invokeJoinpointUsingReflection(AopUtils.java:344)
at java.lang.reflect.Method.invoke(Method.java:498)
at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at sun.reflect.GeneratedMethodAccessor856.invoke(Unknown Source)
at com.sabrix.te.messaging.management.NotificationMessageRemover.removeMessages(NotificationMessageRemover.java:88)
at com.thomsonreuters.persistence.taxentity.NotificationMessageDao.findMessages(NotificationMessageDao.java:99)
com.sabrix.messaging.management.MessageFinderRuntimeException: Could not execute AbstractBaseRepository.findEntitiesByEntityKey(Predicate).
06:57:48.030 [Timer-1] ERROR c.s.t.a.task.AutoContentDownloadTask - [EVENT FAILURE Anonymous:@unknown -> /ExampleApplication/com.sabrix.te.autocontentdownload.task.AutoContentDownloadTask] An exception occurred during removal of messages on channel with id 200.
06:49:26.496 [https-jsse-nio-8443-exec-4] INFO com.sabrix - [EVENT SUCCESS Anonymous:@unknown -> /ExampleApplication/com.sabrix] Failed to look up error 'USSG148','10222','334' Probably OK to ignore.Single entity not found.
06:49:26.488 [https-jsse-nio-8443-exec-4] INFO com.sabrix - [EVENT SUCCESS Anonymous:@unknown -> /ExampleApplication/com.sabrix] Failed to look up error 'USSG192','7983','334' Probably OK to ignore.Single entity not found.
06:49:26.446 [https-jsse-nio-8443-exec-2] INFO com.sabrix - [EVENT SUCCESS Anonymous:@unknown -> /ExampleApplication/com.sabrix] Failed to look up error 'USSG148','10222','334' Probably OK to ignore.Single entity not found.
06:49:26.437 [https-jsse-nio-8443-exec-2] INFO com.sabrix - [EVENT SUCCESS Anonymous:@unknown -> /ExampleApplication/com.sabrix] Failed to look up error 'USSG192','7983','334' Probably OK to ignore.Single entity not found.
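Since every event in the sample starts with a bare HH:MM:SS.mmm timestamp and the stack-trace continuation lines do not, one usual approach (a sketch to adapt, not a drop-in config) is a props.conf stanza with SHOULD_LINEMERGE = true, BREAK_ONLY_BEFORE = ^\d{2}:\d{2}:\d{2}\.\d{3}\s, and TIME_FORMAT = %H:%M:%S.%3N so Splunk fills in the current date. The event-start regex itself can be sanity-checked in Python:

```python
# Candidate event-start pattern: HH:MM:SS.mmm at the beginning of a line.
# Stack frames ("at ...") and "Caused by:" lines will not match, so they
# merge into the preceding event instead of starting a new one.
import re

EVENT_START = re.compile(r"^\d{2}:\d{2}:\d{2}\.\d{3}\s")

def starts_event(line):
    """True if the line opens a new log event rather than continuing one."""
    return bool(EVENT_START.match(line))
```

If breaking still goes wrong after a while, it is worth checking MAX_EVENTS, since very long stack traces can exceed the default merged-line limit.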
Hi, I tried to add the capabilities listed above, but the user still gets the same answer as before: User is not allowed to modify the job
Hi guys, I am trying to configure Splunk to send me alerts on my mobile when the requests against my web server exceed a specified value. I ran the search and it shows me the request count and source IP, but the alert I created is not triggered at all (I viewed the Triggered Alerts menu and it is empty). It is scheduled hourly, triggers when the number of results is greater than 0, and its selected action is Splunk Secure Gateway. My goal is to send these events to my mobile and to SOAR when they exceed a value, and to configure a playbook to automatically block the src_ip, as it is most likely performing a DoS attack. Can anybody help me?

host=192.168.1.1 "DST=192.168.1.174"
| stats count(SRC) AS Requests BY SRC
| sort - Requests
| where Requests>50
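The thresholding that search performs (count requests per source, keep only sources above 50, busiest first) can be sketched offline to sanity-check the logic:

```python
# Offline mirror of `stats count(SRC) AS Requests BY SRC | sort - Requests
# | where Requests>50`, for checking the threshold against sample traffic.
from collections import Counter

def noisy_sources(src_ips, threshold=50):
    """Return (ip, count) pairs above the threshold, busiest first."""
    counts = Counter(src_ips)
    return [(ip, n) for ip, n in counts.most_common() if n > threshold]
```

Because the where clause already filters to Requests>50, "number of results greater than 0" is a reasonable trigger condition; if nothing ever triggers, it is worth confirming that the scheduled search's time range actually covers the traffic and that the search returns rows when run manually over the same window.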