All Topics

Hi,

I am attempting to create a search to detect a password spraying attempt. I need the IP address and hostname along with the different login names that attempted to log in to a particular machine within the last 5 minutes, where the number of login attempts is more than 10.

I created the search below, but it is pulling the wrong data. A sample of the data I am expecting is attached in the screenshot.

index=win* EventCode=4625 Logon_Type=3 Target_User_Name!="" src_ip!="-"
| bucket span=5m _time
| stats dc(TargetUserName) AS Unique_accounts values(TargetUserName) AS tried_accounts by _time, src_ip, Source_Workstation
| eventstats avg(Unique_accounts) AS global_avg, stdev(Unique_accounts) AS global_std
| eval upperBound=(comp_avg+comp_std*3)
| eval isOutlier=if(Unique_accounts>10 AND Unique_accounts>=upperBound, 1, 0)
| sort -Unique_accounts

Thanks in advance.
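One likely fix, offered as a sketch: the base search filters on Target_User_Name while the stats clause uses TargetUserName, and upperBound references comp_avg/comp_std, which are never created (the eventstats produces global_avg/global_std). A version with consistent names, assuming your Windows add-on extracts the field as TargetUserName, might look like this:

index=win* EventCode=4625 Logon_Type=3 TargetUserName!="" src_ip!="-"
| bucket span=5m _time
| stats dc(TargetUserName) AS Unique_accounts values(TargetUserName) AS tried_accounts by _time, src_ip, Source_Workstation
| eventstats avg(Unique_accounts) AS global_avg, stdev(Unique_accounts) AS global_std
| eval upperBound=(global_avg+global_std*3)
| eval isOutlier=if(Unique_accounts>10 AND Unique_accounts>=upperBound, 1, 0)
| where isOutlier=1
| sort -Unique_accounts

The final where clause keeps only the 5-minute windows flagged as outliers; drop it if you want to see every window.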
Hi All,

I have a JSON file that is ingested into Splunk, and I need to create a dashboard with the different APIs and their traffic. The extracted fields include the timestamp as part of the field name, which makes them difficult to use directly in dashboard queries, e.g.:

data{}.fca-accounts-metrics-api-v1.08/23/2021.Number of Failure Traffic

Below is an extract of the JSON content:

{
  "org": "xxx",
  "env": "prod",
  "from_date": "08/23/2021 00:00:00",
  "curr_date": "08/23/2021 23:59:59",
  "data": [
    { "management-api-v1": { "08/23/2021": { "Total Number of Traffic": "0.0", "Average Request Turnaround(ms)": "0.0", "Number of Failure Traffic": "0.0", "Number of Success Traffic": "0.0" } } },
    { "sXXXX-api-v1": { "08/23/2021": { "Total Number of Traffic": "2113.0", "Average Request Turnaround(ms)": "57.68", "Number of Failure Traffic": "0.0", "Number of Success Traffic": "2108.0" } } },
    { "sXX-api-v1": { "08/23/2021": { "Total Number of Traffic": "0.0", "Average Request Turnaround(ms)": "0.0", "Number of Failure Traffic": "0.0", "Number of Success Traffic": "0.0" } } },
    { "open-banking-v31": { "08/23/2021": { "Total Number of Traffic": "0.0", "Average Request Turnaround(ms)": "0.0", "Number of Failure Traffic": "0.0", "Number of Success Traffic": "0.0" } } },
    { "fca-accounting-metrics-api-v1": { "08/23/2021": { "Total Number of Traffic": "135.0", "Average Request Turnaround(ms)": "57.66", "Number of Failure Traffic": "0.0", "Number of Success Traffic": "136.0" } } }
  ]
}

Is there a way to extract the API names and the traffic details? If this could be turned into a tabular form with the date, API names, and their traffic details, it would be great for creating a dashboard or chart.
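One approach, sketched here with assumed index and sourcetype names: expand each element of the data array into its own result, pull the API name and date out of the dynamic JSON keys with rex, and read the metrics the same way, which sidesteps the timestamp-in-the-field-name problem. The \s* in the patterns tolerates whitespace in the raw JSON:

index=api_metrics sourcetype=api:json
| spath path=data{} output=api_block
| mvexpand api_block
| rex field=api_block "^\s*\{\s*\"(?<api_name>[^\"]+)\"\s*:\s*\{\s*\"(?<metric_date>[^\"]+)\""
| rex field=api_block "\"Total Number of Traffic\"\s*:\s*\"(?<total_traffic>[\d.]+)\""
| rex field=api_block "\"Number of Success Traffic\"\s*:\s*\"(?<success_traffic>[\d.]+)\""
| rex field=api_block "\"Number of Failure Traffic\"\s*:\s*\"(?<failure_traffic>[\d.]+)\""
| rex field=api_block "\"Average Request Turnaround\(ms\)\"\s*:\s*\"(?<avg_turnaround_ms>[\d.]+)\""
| table metric_date api_name total_traffic success_traffic failure_traffic avg_turnaround_ms

This yields one row per API per date, which feeds a dashboard table or chart directly.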
I am trying to find the occurrences where the state changes due to an error. Below are my sample events:

2021/08/01 07:12:12.098 host=12345 In
2021/08/01 07:13:12.098 host=12345 In
2021/08/01 07:14:12.098 host=12345 Out
2021/08/01 07:15:12.098 host=12345 Out
2021/08/01 07:16:12.098 host=12345 In
2021/08/01 07:17:12.098 host=12345 In
2021/08/01 07:18:12.098 host=12345 Out
2021/08/01 07:18:35.098 host=12345 ERROR
2021/08/01 07:19:12.098 host=12345 In
2021/08/01 07:20:12.098 host=12345 Out

I need to group the events when the state (In/Out) changed due to an ERROR event. For the above sample events, I should not get any result, because when the ERROR event happened the host was already in the "Out" state. We need to monitor only when an "In" host changes to "Out" due to an ERROR. I tried the search below:

index=myindex ("Cut-In" OR "Cut-Out" OR "ERROR")
| rex "host=(?<host>\d+) (?<State>.*)"
| transaction host startswith="State=In" endswith="Out" maxspan=24h
| where searchmatch("ERROR")
| table _time host

But this query returns a result that groups the "In" state logged at 07:16:12 as the start of the transaction and 07:20:12 as the end, which is not a valid scenario. Please help me with framing the logic.
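One way to avoid transaction's greedy grouping, sketched here under the assumption that events look exactly like the sample: sort ascending, then use streamstats to carry the two previous states per host and flag only an exact In, then ERROR, then Out sequence:

index=myindex ("In" OR "Out" OR "ERROR")
| rex "host=(?<host>\d+)\s+(?<State>\w+)"
| sort 0 _time
| streamstats current=f window=2 list(State) AS prev_states by host
| where State="Out" AND mvindex(prev_states, 0)="In" AND mvindex(prev_states, 1)="ERROR"
| table _time host

With the sample above this returns nothing, because the ERROR at 07:18:35 is preceded by "Out", not "In"; a genuine In, ERROR, Out run would match.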
[Updated] Hi All, @ITWhisperer  Please help me with this. I have data like below:

HostName LastConnected
ABC 23/08/2021 10:04
ABC 23/08/2021 10:34
AAA 23/08/2021 12:01
AAA 23/08/2021 12:32
AAA 23/08/2021 13:03
AAA 23/08/2021 13:34
ABC 23/08/2021 17:03
AAA 23/08/2021 15:01
AAA 23/08/2021 15:35
ABC 23/08/2021 14:00
AAA 23/08/2021 21:02
AAA 23/08/2021 22:03
AAA 23/08/2021 20:02
ABC 23/08/2021 11:02
ABC 23/08/2021 11:34
ABC 23/08/2021 12:02
ABC 23/08/2021 13:34
AAA 23/08/2021 14:02
AAA 23/08/2021 14:34
ABC 23/08/2021 15:04
ABC 23/08/2021 16:34
ABC 23/08/2021 16:05
ABC 23/08/2021 22:02
ABC 23/08/2021 23:36
AAA 23/08/2021 11:03
ABC 24/08/2021 11:36
AAA 24/08/2021 12:03
ABC 24/08/2021 11:00
AAA 24/08/2021 12:36
ABC 23/08/2021 17:36
AAA 23/08/2021 20:32
AAA 23/08/2021 21:32

Now I want output like this (one column per hour, each cell holding that hour's check-in times or "offline"):

HostName | TotalHours | Max_Consecutive | 23/08/2021 10 | 23/08/2021 11 | 23/08/2021 12 | 23/08/2021 13 | 23/08/2021 14 | 23/08/2021 15 | 23/08/2021 16 | 23/08/2021 17 | 23/08/2021 18 | 23/08/2021 19 | 23/08/2021 20 | 23/08/2021 21 | 23/08/2021 22 | 23/08/2021 23 | 24/08/2021 11 | 24/08/2021 12 | 24/08/2021 13 | 24/08/2021 14 | 24/08/2021 15
ABC 4 2 23/08/2021 10:04 23/08/2021 10:34 offline 23/08/2021 12:02 23/08/2021 13:34 23/08/2021 14:00 23/08/2021 15:04 23/08/2021 16:34 23/08/2021 16:05 23/08/2021 17:03 23/08/2021 17:34 offline offline offline offline 23/08/2021 22:02 23/08/2021 23:36 24/08/2021 11:36 24/08/2021 11:00 offline offline offline offline
AAA 8 5 offline 23/08/2021 11:02 23/08/2021 11:34 23/08/2021 12:01 23/08/2021 12:32 23/08/2021 13:03 23/08/2021 13:34 23/08/2021 14:02 23/08/2021 14:34 23/08/2021 15:01 23/08/2021 15:35 offline offline offline offline 23/08/2021 20:02 23/08/2021 20:32 23/08/2021 21:02 23/08/2021 21:32 23/08/2021 22:03 offline offline 24/08/2021 12:03 24/08/2021 12:36 offline offline offline

Note: I have more than 200,000 (2 lakh) records, and if a user selects one week of data it should work for the whole week. If a host is connected for the complete hour, it is online, meaning it appears two times in the hour. So if mvcount >= 2, the hour is online and we need to count it; if it is 1, do not count it and keep it as it is; 0 means offline. A starting sketch is shown below.

Thank you in advance
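A starting sketch, assuming LastConnected is a string field in a searchable index (index name is a placeholder) and that a row exists for every host/hour you care about; hours with no events at all would need to be filled in, for example with timechart or fillnull, before the consecutive count is fully reliable:

index=host_checkins
| eval conn_time=strptime(LastConnected, "%d/%m/%Y %H:%M")
| bin conn_time span=1h
| stats count AS checkins by HostName conn_time
| eval online=if(checkins>=2, 1, 0)
| sort 0 HostName conn_time
| streamstats reset_on_change=true sum(online) AS run by HostName online
| stats sum(online) AS TotalHours max(eval(if(online=1, run, 0))) AS Max_Consecutive by HostName

The streamstats running sum resets whenever the online flag flips, so "run" is the length of the current consecutive-online stretch, and its maximum gives Max_Consecutive. The hourly online/offline grid itself can be built from the same stats output with an eval of strftime(conn_time, "%d/%m/%Y %H") followed by xyseries HostName hour status.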
I am trying to export and import a dashboard using the Controller API and the Postman tool.

Regarding export, I have this working OK:
- I created an API Client with the administrator role.
- I used the Controller API with the API Client credentials to generate a Bearer Token: https://{{controller_uri}}/controller/api/oauth/access_token
- I successfully exported a dashboard via https://{{controller_uri}}/controller/CustomDashboardImportExportServlet?dashboardId=12355 with the header Authorization: Bearer {{bearer_token}}

Now I am trying to import a dashboard using the API, with:
- a new dashboard name that doesn't currently exist, and
- basic authentication (my user account, which also has admin access), because the import API does not support the use of the Bearer Token (an open enhancement exists, internal story ID https://jira.corp.appdynamics.com/browse/METADATA-9305).

I simply get a 500 response. What I tried:
- Method: POST
- URI: https://{{controller_uri}}/controller/CustomDashboardImportExportServlet
- Body: the JSON of the previously exported dashboard
- Content-Type: application/json

As per the documentation (https://docs.appdynamics.com/4.5.x/en/extend-appdynamics/appdynamics-apis/configuration-import-and-export-api), I also tried using curl, and it worked:

curl -X POST --user Allister.Green@RSAGroup:<pw> https://<domain uri>/controller/CustomDashboardImportExportServlet -F file=@dashboard.json

Because the curl example uses a file, I also tried using a file with Postman instead of the dashboard JSON as the message body, but this also generated a 500 response. To use a file:
- Body: form-data
- KEY: file, VALUE: <filename>, CONTENT TYPE: application/json

Has anyone got dashboard imports working using Postman? If so, please can you share how.

Thanks, Allister.
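One detail worth checking, offered as a hedged guess at the 500: the servlet expects a multipart/form-data upload, so the request as a whole must not carry Content-Type: application/json. In Postman that means leaving the request-level Content-Type header unset (so the multipart boundary is generated automatically) and setting application/json only on the file part itself, which in curl terms is:

curl -X POST --user 'user@account:<pw>' \
  "https://<controller>/controller/CustomDashboardImportExportServlet" \
  -F "file=@dashboard.json;type=application/json"

Manually adding Content-Type: application/json to a form-data request strips the boundary parameter, which servers commonly answer with a 500.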
Hi, After upgrading our Splunk version to 8.2.1, the charts under "CPU usage by process" and "Memory Usage by process" on the monitoring console's Overview tab are empty. However, they do appear under the "Resource Usage: Instance" tab. Is anyone able to explain why the charts are empty on the Overview tab of the monitoring console?
Hello

In my base search I'm looking for stores with a minimum count of 1 for 4 different kinds of errors. I count the errors, put them in an xyseries table, and filter them out, which works great. Now I would like to know which stores hit all the criteria on which day.

Code:

index=main host=* (thrown NotFoundException:Not found) OR (X-30056) OR (Interceptor for tx_pool ITransactionPool has thrown exception, unwinding now) OR (SocketTimeoutException Read Timeout)
| rex field=_raw "An accepted error occurred:.(?<exception>\w+-\d+):."
| rex field=_raw "SocketTimeoutException: R(?<exception>\w+.\w+)"
| rex field=_raw "serverDataState:.(?<exception>\w+.\w+)"
| rex field=_raw "Caused by: java.io.InterruptedIOException:.(?<exception>.*)"
| rex field=_raw "thrown NotFoundException:(?<exception>\w+.\w+)"
| eval ccc = cooperative+cost_center
| stats count by ccc exception
| xyseries ccc exception count
| search X-30056 > 0 AND "Read Timeout" > 0 AND "Not found" > 0 AND "Output operation aborted" > 0

Result:

ccc      X-30056  Not found  Output operation aborted  Read Timeout  Read Timeout Read timed
0011111  339      6          12                        193           364
0022222  620      4          1                         640           992 1
0033333  588      4          7                         2549          4956 1

What I would like to achieve is the following:

Date        ccc
08/17/2021  0011111
08/18/2021  0022222
08/20/2021  0033333

I'm thankful for any help!
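A sketch of one way to get there, picking up after the "| eval ccc = cooperative+cost_center" line of the existing search: add a daily bin, fold the date into the xyseries row key, filter, then split the key back out. Field names containing spaces or dashes need single quotes in the where clause:

| bin _time span=1d
| stats count by _time ccc exception
| eval row=strftime(_time, "%m/%d/%Y") . "|" . ccc
| xyseries row exception count
| fillnull value=0
| where 'X-30056'>0 AND 'Read Timeout'>0 AND 'Not found'>0 AND 'Output operation aborted'>0
| rex field=row "^(?<Date>[^|]+)\|(?<ccc>.+)$"
| table Date ccc

This keeps only day/store combinations where all four error types occurred at least once.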
Hi, I have the following SPL as a dashboard panel which shows real-time searches, so I can contact the owners and discuss converting them to scheduled reports instead:

| rest /services/search/jobs
| search eventSorting=realtime
| eval author=upper(author)
| lookup snow_sys_user_list.csv user_name as author
| table author label eventSearch dv_name dispatchState, eai:acl.owner, isRealTimeSearch, performance.dispatch.stream.local.duration_secs, runDuration, searchProviders, splunk_server

However, the panel is still showing reports that have been converted to scheduled reports/alerts or deleted entirely. Is there some SPL I have to add to get it to only see "active" real-time searches?

Thanks
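A hedged guess at the missing filter: /services/search/jobs returns recently dispatched jobs in any state, including finished ones, so restricting on the job's dispatchState should narrow the panel to searches that are actually running, for example:

| rest /services/search/jobs
| search eventSorting=realtime dispatchState="RUNNING" isDone=0
| eval author=upper(author)
| lookup snow_sys_user_list.csv user_name as author
| table author label eventSearch dispatchState eai:acl.owner runDuration splunk_server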
Hello,

I have updated a Splunk cluster from v7.3.8 to v8.1.2 following the documentation provided by Splunk, and since the update we have an issue with scheduled searches. Scheduled searches run normally after a search head cluster restart, but after some time they start being skipped on the captain, and they do not run at all on the other nodes.

In the screenshots above, scheduled searches were running until 8 AM CET; then all of them were skipped on the captain and the other search heads did not process any scheduled searches. I found a workaround: moving the captain to another search head makes scheduled searches run again, as seen in the example above.

The cluster is composed of 3 indexers, 3 search heads, and 1 master node. I have increased the relative concurrency limit for scheduled searches to 70% and also adapted limits.conf:

# The base number of concurrent searches.
base_max_searches = 60
# Max real-time searches = max_rt_search_multiplier x max historical searches.
# max_rt_search_multiplier = 1
# The maximum number of concurrent searches per CPU.
max_searches_per_cpu = 10
max_searches_perc = 60

But nothing helps.

A sure way to reproduce this on the system is to stop one of the search heads and then start it. Approximately 10 minutes after the search head starts, all scheduled searches are skipped on the captain.

In the logs there is only one type of "error" (actually an info message):

_ACCELERATE_AF2AEFDE-8E13-4DCA-90CB-C21D356D9A60_iqpress_nobody_e0c3b6f1a41c2518_ACCELERATE_ The maximum number of concurrent historical scheduled searches on this cluster has been reached (220)

Thank you very much in advance
Hi all, my data is as below:

11111_aaaa/ppppaaaa
1110_bb/kjm

I want to remove everything after the /, like this:

11111_aaaa
1110_bb

Thanks.
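A minimal sketch, assuming the values live in a field called myfield (swap in your real field name); either split/mvindex or a sed-style rex works:

| eval myfield=mvindex(split(myfield, "/"), 0)

or equivalently:

| rex mode=sed field=myfield "s/\/.*$//"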
Hi everyone, I'm a bit confused about the retention time of an index. I created an index (via indexes.conf) with a 90-day retention time and a max volume of 50 GB. I always understood that logs are deleted once the index reaches the max volume or the data reaches 90 days of age. But in my case the index has reached 4.8 GB and the oldest event is from the 1st of May, which is more than 90 days ago. Do I understand this wrong?
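One reasoning step that often explains this: retention is enforced per bucket, not per event. A bucket is only frozen (deleted by default) once its newest event is older than frozenTimePeriodInSecs, so a bucket spanning a long time range keeps its old events until the whole bucket ages out. For reference, a 90-day/50 GB configuration like the one described would look roughly like this (stanza name and paths are assumptions):

[my_index]
homePath = $SPLUNK_DB/my_index/db
coldPath = $SPLUNK_DB/my_index/colddb
thawedPath = $SPLUNK_DB/my_index/thaweddb
# 90 days, applied per bucket: a bucket freezes when its newest event passes this age
frozenTimePeriodInSecs = 7776000
# 50 GB total size cap for the index
maxTotalDataSizeMB = 51200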
Hi All, I'm trying to onboard logs related to litigation hold from our Exchange servers. So far we've added the add-on "TA-Exchange-Mailbox".

Which other add-on should we configure? Which monitor stanza should be enabled?

Thanks in advance!
I need all the stats on the x-axis.
Hello, I have some issues creating a props.conf file for the following sample data events. It's a text file with a header in it. I created one, but it is not working. Thank you so much, any help will be highly appreciated.

Sample events:

UserId, UserType, System, EventType, EventId, STF, SessionId, SourceAddress, RCode, ErrorMsg, Timestamp, Dataload, Period, WFftCode, ReturnType, DataType
2021-08-19 08:05:52,763-CDT - SFTCE,IDCSEE,SATA,FA,FETCHFI,000000000,E3CE4819360E57124D220634E0D,sata,00,Successful,20210819130552,SCM3R8,,,1,0
2021-08-19 08:06:53,564-CDT - SFTCE,IDCSEE,SATA,FA,FETCHFI,000000000,E3CE4819360E57124D220634E0D,sata,00,Successful,20210819130653,SCM3R8,,,1,0

What I wrote in my props.conf file:

[ __auto__learned__ ]
SHOULD_LINEMERGE=false
LINE_BREAKER=([\r\n]+)
INDEXED_EXTRACTIONS=psv
TIME_FORMAT=%Y-%m-%d %H:%M:%S .%3N
TIMESTAMP_FIELDS=TIMESTAMP
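A hedged alternative, since the data is comma-separated (not pipe-separated, so INDEXED_EXTRACTIONS=psv won't match) and each event carries a timestamp prefix before the CSV payload rather than a timestamp inside a named field: handle event breaking and timestamp recognition in props.conf, and do the field extraction at search time. The sourcetype name below is an assumption:

[my_csv_events]
SHOULD_LINEMERGE = false
LINE_BREAKER = ([\r\n]+)
# Timestamp sits at the start of each event: 2021-08-19 08:05:52,763-CDT
TIME_PREFIX = ^
TIME_FORMAT = %Y-%m-%d %H:%M:%S,%3N
MAX_TIMESTAMP_LOOKAHEAD = 24
# The -CDT suffix is not parsed here; set TZ on the input if events
# are misinterpreted (assumption: all sources share one timezone).

The 16 comma-separated values after the " - " separator can then be mapped to the header's field names with a search-time rex or a REPORT/transforms extraction.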
From the logs, I need to get the count of events where the msg field value matches factType=COMMERCIAL and has filters. Secondly, I need to extract the filter type used (like id in the example below) and extract the string sorts={"sortOrders":[{"key":"id","order":"DESC"}]}. Using a Splunk query with a basic wildcard does not work efficiently. Could you please assist?

cf_space_name=prod msg="*/facts?factType=COMMERCIAL&sourceSystem=ADMIN&sourceOwner=ABC&filters=*"

msg: abc.asia - [2021-08-23T00:27:08.152+0000] "GET /facts?factType=COMMERCIAL&sourceSystem=ADMIN&sourceOwner=ABC&filters=%257B%2522stringMatchFilters%2522:%255B%257B%2522key%2522:%2522BFEESCE((json_data-%253E%253E'isNotSearchable')::boolean,%2520false)%2522,%2522value%2522:%2522false%2522,%2522operator%2522:%2522EQ%2522%257D%255D,%2522multiStringMatchFilters%2522:%255B%257B%2522key%2522:%2522json_data-%253E%253E'id'%2522,%2522values%2522:%255B%25224970111%2522%255D%257D%255D,%2522containmentFilters%2522:%255B%255D,%2522nestedMultiStringMatchFilter%2522:%255B%255D,%2522nestedStringMatchFilters%2522:%255B%255D%257D&sorts=%257B%2522sortOrders%2522:%255B%257B%2522key%2522:%2522id%2522,%2522order%2522:%2522DESC%2522%257D%255D%257D&pagination=null

Thanks in advance.
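A sketch of one approach: the filters and sorts values are URL-encoded twice (%2522 decodes to %22 and then to a quote), so decoding msg twice with urldecode before applying rex makes the JSON-like parts directly matchable. The search terms are taken from the question:

cf_space_name=prod msg="*factType=COMMERCIAL*" msg="*filters=*"
| eval decoded=urldecode(urldecode(msg))
| rex field=decoded "sorts=(?<sorts>\{.*?\}\]\})"
| rex field=decoded "\"sortOrders\":\[\{\"key\":\"(?<sort_key>[^\"]+)\",\"order\":\"(?<sort_order>[^\"]+)\""
| stats count by sort_key sort_order

The first rex captures the whole sorts={...} string; the second pulls out the key (id in the example) and the order for counting.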
I get error messages in ES saying that the API key for the app called MITRE ATT&CK needs to be corrected. I have really tried, but I do not know where to find an API key for this app. Thank you in advance.
Hi, I need help with searching a field value from the first search in another search with a different sourcetype, and combining fields from both searches in one table. The issue is that the field name is the same in both sourcetypes but the values are formatted differently. For example:

Sourcetype 1 has a field named "user" with value "ABCD"
Sourcetype 2 has a field named "user" with value "xxx\\ABCD"

I tried the query below but am not getting the output:

sourcetype=sourcetype1
| eval User="*".User
| table User
| join User
    [search sourcetype=sourcetype2
    | fields User HostName HostIP FileName Timestamp Message]
| table User Email HostName HostIP FileName Message
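The join won't match here because eval User="*".User builds a literal asterisk string, not a wildcard. A sketch of an alternative, assuming the domain prefix always ends with a backslash and the listed field names exist: normalize User in both sourcetypes and combine with stats instead of join:

(sourcetype=sourcetype1) OR (sourcetype=sourcetype2)
| eval User_norm=replace(User, "^.*\\\\", "")
| stats values(Email) AS Email values(HostName) AS HostName values(HostIP) AS HostIP values(FileName) AS FileName values(Message) AS Message by User_norm
| rename User_norm AS User

The replace strips everything up to the last backslash, so "xxx\\ABCD" and "ABCD" land in the same group.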
Hello, I would like to request help with correctly configuring SSL between a universal forwarder and an indexer. I tried following this procedure: https://docs.splunk.com/Documentation/Splunk/8.2.1/Security/Howtoself-signcertificates

And I ended up with two public certificates and a private key:

myServerCertificate.pem
myServerPrivateKey.key
myCACertificate.pem

Afterwards I prepared the certificate in the order described in https://docs.splunk.com/Documentation/Splunk/8.2.1/Security/HowtoprepareyoursignedcertificatesforSplunk:

cat myServerCertificate.pem myServerPrivateKey.key myCACertificate.pem > myNewServerCertificate.pem

This resulted in a signed server certificate with the authority's chain. I am struggling to understand what exactly goes where, and, assuming I do understand it, how to add one more certificate for another server. My thinking is that the indexer has to have the private key (not sure whether the authority's key, the server key, or the chain), and that the forwarder only needs a public key (not sure which one).

Summary of what I have after running all the commands:

myCAPrivateKey.key
myCACertificate.csr
myCACertificate.pem
myServerPrivateKey.key
myServerCertificate.csr
myServerCertificate.pem
myNewServerCertificate.pem

Appreciate your help.
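A rough sketch of the split, hedged because exact setting names vary by version: the indexer (the receiving side) gets the combined cert-plus-key file, and both sides need the CA certificate for verification. Only the indexer needs myServerPrivateKey.key (already inside the concatenated file); the forwarder needs at minimum myCACertificate.pem. Paths, the port, and the password are placeholders:

Indexer, inputs.conf:

[splunktcp-ssl:9997]
disabled = 0

[SSL]
serverCert = $SPLUNK_HOME/etc/auth/mycerts/myNewServerCertificate.pem
sslPassword = <private key password>
requireClientCert = false

Forwarder, outputs.conf:

[tcpout:primary_indexers]
server = <indexer host>:9997
sslVerifyServerCert = true

Both sides, server.conf (points verification at the CA):

[sslConfig]
sslRootCAPath = $SPLUNK_HOME/etc/auth/mycerts/myCACertificate.pem

To add another receiving server, issue it its own server certificate signed by the same CA (a new CSR and cat step) and repeat the indexer-side configuration there; the forwarder side does not change, since it trusts the CA rather than any individual server certificate.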
I logged in and switched to the free version. The search error I am receiving:

Error in 'litsearch' command: Your Splunk license expired or you have exceeded your license limit too many times. Renew your Splunk license by visiting www.splunk.com/store or calling 866.GET.SPLUNK.

Licensing shows: Free license group. Any idea how to clear or reset this to make it work?
Hi all,

I am looking to check whether there has been an event within the last 3 hours for each of three different categories. If an event has been detected in the last 3 hours, I would like a status column that says "Registry In Sync"; otherwise the status column should read "Out of Sync". Something like the following:

Type  _time                Status
A     2021-08-10 09:27:07  Out of Sync
B     2021-08-23 01:24:56  Registry In Sync
C     2021-08-19 23:25:28  Out of Sync

The important thing is that it is categorised by the Type field. I appreciate any and all help!
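A minimal sketch, with the index name assumed: take the latest event time per Type and compare it against now() minus 3 hours (10800 seconds):

index=my_index
| stats latest(_time) AS last_seen by Type
| eval Status=if(now() - last_seen <= 10800, "Registry In Sync", "Out of Sync")
| eval _time=last_seen
| table Type _time Status

Run it over a time range of at least 3 hours (ideally longer, so types whose last event is older than 3 hours still appear and can be marked Out of Sync).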