All Topics

Find Answers
Ask questions. Get answers. Find technical product solutions from passionate members of the Splunk community.


The latest version of the Linux x64 php-agent (21.7.0.4560) is packaged with an out-of-date component: netty (4.1.38). This currently has some CVEs logged against it (CVE-2019-20445, CVE-2019-20444) under the path /proxy/lib/tp/grpc-netty-shaded-1.24.0.jar. Does anyone know if this is something that can be patched, or if there is an intention to include a more up-to-date version in a future build?
If we want to use Splunk as our central log monitoring tool, how can we monitor COTS application logs in Splunk?
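For illustration, monitoring a third-party (COTS) application's log files usually starts with a file monitor input on the forwarder installed on that host; a minimal sketch, where the path, index, and sourcetype names are hypothetical placeholders:

    # inputs.conf on the universal forwarder (path/index/sourcetype are placeholders)
    [monitor:///opt/cots_app/logs/*.log]
    index = cots_app
    sourcetype = cots:application:log
    disabled = 0

The matching index would still need to exist on the indexers, and a props.conf stanza for the sourcetype would control timestamping and line breaking.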
Usually Splunk seems to interpret hyphens in Event Viewer channel names as folders. I have this input, but it's not working:

    [WinEventLog://Microsoft-ServerManagementExperience]
    disabled = 0
    index = wineventlog

Here is a screenshot of the folder I'd like to monitor with Splunk.
I am not sure if anyone else has encountered this, but in our distributed environment that was just upgraded from 8.0.3 to 8.2.2, we have noticed issues with the health report manager. The new IOWait feature in the health report is extremely "chatty" even though all other aspects of the deployment are in great shape. Although we can successfully disable the IOWait feature in the console and via a local health.conf file, the feature is still being included in the health report. I've opened a case with Splunk support, but was just wondering if anyone else has encountered this behavior.
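For context, the kind of health.conf override being described looks roughly like the sketch below; the stanza name is an assumption based on the feature name shown in the health report:

    # $SPLUNK_HOME/etc/system/local/health.conf (stanza name assumed)
    [feature:iowait]
    disabled = 1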
Hello, I am having some issues creating a props.conf file for the following sample data events. It's a text file with a header in it. I created one, but it is not working. Thank you so much, any help will be highly appreciated. I am giving the events below; the UserId and Timestamp values are marked in bold below.

    UserId, UserType, System, EventType, EventId, STF, SessionId, SourceAddress, RCode, ErrorMsg, Timestamp, Dataload, Period, WFftCode, ReturnType, DataType
    2021-08-19 08:05:52,763-CDT - FETCE,SRGEE,SAATCA,FETCHFA,FI,000000000,E3CE4819360E57124D220634E0D,saatca,00,Successful,20210819130552,UCJ3R8,,,1,0
    2021-08-19 08:06:53,564-CDT - FETCE,SRGEE,SAATCA,FA,FETCHFI,000000000,E3CE4819360E57124D220634E0D,saatca,00,Successful,20210819130653,UCJ3R8,,,1,0

What I wrote in my props.conf file:

    [ __auto__learned__ ]
    SHOULD_LINEMERGE=false
    LINE_BREAKER=([\r\n]+)
    INDEXED_EXTRACTIONS=psv
    TIME_FORMAT=%Y-%m-%d %H:%M:%S .%3N
    TIMESTAMP_FIELDS=TIimestamp
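For comparison, a sketch of a stanza that keys the event time off the leading timestamp of each line rather than the Timestamp column; the sourcetype name is a placeholder, the sample data looks comma-separated rather than pipe-separated, and the format string (including the ,%3N subseconds and the trailing -CDT) is an assumption that would need testing against the real file:

    [cots_fetch_log]
    SHOULD_LINEMERGE = false
    LINE_BREAKER = ([\r\n]+)
    # Each event starts with e.g. "2021-08-19 08:05:52,763-CDT"
    TIME_PREFIX = ^
    TIME_FORMAT = %Y-%m-%d %H:%M:%S,%3N
    MAX_TIMESTAMP_LOOKAHEAD = 30
    # If indexed extractions are wanted, the sample looks like CSV, not PSV:
    # INDEXED_EXTRACTIONS = csv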
As I mentioned below, the prod column has multiple values and I want to split it based on the \n newline character and get the output as shown in the output image. Current data: (screenshot) Expected output: (screenshot) Thanks in advance.
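For illustration, the usual SPL shape for this is makemv followed by mvexpand; the field name prod is taken from the question, everything else is an assumption about how the values are stored:

    ... | makemv tokenizer="([^\n]+)" prod
        | mvexpand prod

makemv turns prod into a multivalue field split on newlines, and mvexpand then emits one row per value.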
Hey Everyone! I'm in need of some help, advice, a Ouija board (lol)... whatever can do the trick. I want to know if it is possible to consolidate data from a search that is not generated on Splunk. My supervisor wants to receive 1 report instead of 2. Do any of you know if this is even possible? Thanks, Cyber_Nerd3
Is it possible to configure a 6.5.2 universal forwarder to send events to an HTTP Event Collector (on 7.2)? I have a series of universal forwarders that had been sending logs to an old indexer on port 9997 -- both the forwarders and the indexer are slated for retirement "soon", part of an app that's been mostly retired already, but I need to keep them going a few more months. The indexer hardware died badly, and I thought I'd easily be able to switch these UFs over to our current indexer, which runs 7.2 (upgrading soon to 8.x), but that indexer only listens using the HEC on port 8088. It's behind an AWS ALB, so opening up port 9997 would be problematic. Is this even *supposed* to be possible (sending events from a 6.5.2 UF to a 7.2 HEC)? I've tried putting the following into local/outputs.conf, but it seems to have no impact. Splunk isn't complaining about the statements when it starts up, and it also isn't sending any network traffic on port 8088.

    [httpout]
    httpEventCollectorToken = [642bc63f-8e62-4b3e-9579-f146345eeaa2]
    uri = http://splunk.domain-name.com:8088
    batchSize = 65536
    batchTimeout = 5
Hey, I am actually facing an issue forwarding data via tcpout. My goal is to forward some data to the main indexer, and a subset of the data, with specific props.conf settings applied, to another indexer, while additionally keeping the subset on the main indexer without those additional props.conf settings.

Problem: data is actually sent to both, with the props.conf applied on both tcpout paths.
sourcetype A + sourcetype XXX ---> also using Props/Transforms (should be ignored) ---> Main Indexer
sourcetype A ---> using Props/Transforms (required) ---> Secondary Indexer

Scope:
sourcetype A + sourcetype XXX ---> also using Props/Transforms ---> Main Indexer
sourcetype A ---> some Props/Transforms ---> Secondary Indexer

Is there any solution to fix the problem? Thank you for helping. Regards, Christoph
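For reference, the usual shape of sourcetype-based routing to two output groups with _TCP_ROUTING looks like the sketch below; the group names, server addresses, and the sourcetype_A stanza name are placeholders, and the routing only takes effect where parsing happens (heavy forwarder or indexer):

    # outputs.conf
    [tcpout]
    defaultGroup = main_indexers

    [tcpout:main_indexers]
    server = main-indexer.example.com:9997

    [tcpout:secondary_indexers]
    server = secondary-indexer.example.com:9997

    # props.conf
    [sourcetype_A]
    TRANSFORMS-routing = route_sourcetype_A

    # transforms.conf
    [route_sourcetype_A]
    REGEX = .
    DEST_KEY = _TCP_ROUTING
    FORMAT = main_indexers,secondary_indexers

Whether the secondary group's additional props/transforms can be kept off the copy that goes to the main group depends on where in the pipeline they run, which is the crux of the question.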
Hi, in my query:

    index="my_local" | sort -Date

I get a list of items, and if I look at one item (and click "show as raw text") it looks like this:

    {"Level":"Info","MessageTemplate":"ApiRequest","RenderedMessage":"ApiRequest","Properties":{"httpMethod":"GET","statusCode":200}, ...}

Since a lot of the properties are wrapped inside "Properties", I always have to expand it manually by clicking the expand icon (with the plus sign). Is there any way to get the search results already expanded (so I don't always have to click "Properties" to manually expand it)? Many thanks!
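One SPL-side workaround sketch is to promote the nested values to top-level fields with spath so they show up directly in the field list or a table; the paths below come from the sample event, while the table columns are assumptions:

    index="my_local"
    | sort -Date
    | spath path=Properties.httpMethod output=httpMethod
    | spath path=Properties.statusCode output=statusCode
    | table _time Level MessageTemplate httpMethod statusCode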
Hi, I have the below data in a lookup, and I need to add up the row data. For example: for the first row I need to add up the total 'OFF', total 'B', and total 'V' values and show the counts in 3 different columns for OFF, B and V. Similarly, for each row I need to add up the same data values and show them in a column. Any query or commands?
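A rough sketch of the usual foreach pattern for counting specific cell values across a row; the lookup name, the day_* column wildcard, and the OFF/B/V values are assumptions, since the actual columns are only visible in the screenshot:

    | inputlookup my_schedule.csv
    | eval OFF_count=0, B_count=0, V_count=0
    | foreach day_*
        [ eval OFF_count=OFF_count+if('<<FIELD>>'="OFF",1,0),
               B_count=B_count+if('<<FIELD>>'="B",1,0),
               V_count=V_count+if('<<FIELD>>'="V",1,0) ]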
Hi, I am attempting to create a search for a password spraying attempt. I need the IP address and hostname along with the different login names used to attempt to log in to a particular machine within the last 5 minutes. Also, the number of login attempts should be more than 10. I created the below search, but it's pulling the wrong data. A sample of the data I am expecting is attached in the screenshot.

    index=win* EventCode=4625 Logon_Type=3 Target_User_Name!="" src_ip!="-"
    | bucket span=5m _time
    | stats dc(TargetUserName) AS Unique_accounts values(TargetUserName) as tried_accounts by _time, src_ip Source_Workstation
    | eventstats avg(Unique_accounts) as global_avg, stdev(Unique_accounts) as global_std
    | eval upperBound=(comp_avg+comp_std*3)
    | eval isOutlier=if(Unique_accounts>10 and Unique_accounts>=upperBound, 1, 0)
    | sort -Unique_accounts

Thanks in advance.
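One thing that stands out in the search above: eventstats creates global_avg/global_std, but the eval for upperBound references comp_avg/comp_std, which do not exist, so upperBound (and therefore the outlier test) never evaluates. A sketch of the corrected tail, keeping everything else the same:

    | eventstats avg(Unique_accounts) as global_avg, stdev(Unique_accounts) as global_std
    | eval upperBound=(global_avg + global_std*3)
    | eval isOutlier=if(Unique_accounts>10 AND Unique_accounts>=upperBound, 1, 0)
    | where isOutlier=1
    | sort -Unique_accounts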
Hi All, I have a JSON file that is ingested into Splunk, and I need to create a dashboard with the different APIs and their traffic. The extracted fields include the date in the field name as well, e.g. data{}.fca-accounts-metrics-api-v1.08/23/2021.Number of Failure Traffic, which makes them difficult to use directly in the dashboard queries. Below is an extract of the JSON content:

    {
      "org": "xxx",
      "env": "prod",
      "from_date": "08/23/2021 00:00:00",
      "curr_date": "08/23/2021 23:59:59",
      "data": [
        { "management-api-v1": { "08/23/2021": { "Total Number of Traffic": "0.0", "Average Request Turnaround(ms)": "0.0", "Number of Failure Traffic": "0.0", "Number of Success Traffic": "0.0" } } },
        { "sXXXX-api-v1": { "08/23/2021": { "Total Number of Traffic": "2113.0", "Average Request Turnaround(ms)": "57.68", "Number of Failure Traffic": "0.0", "Number of Success Traffic": "2108.0" } } },
        { "sXX-api-v1": { "08/23/2021": { "Total Number of Traffic": "0.0", "Average Request Turnaround(ms)": "0.0", "Number of Failure Traffic": "0.0", "Number of Success Traffic": "0.0" } } },
        { "open-banking-v31": { "08/23/2021": { "Total Number of Traffic": "0.0", "Average Request Turnaround(ms)": "0.0", "Number of Failure Traffic": "0.0", "Number of Success Traffic": "0.0" } } },
        { "fca-accounting-metrics-api-v1": { "08/23/2021": { "Total Number of Traffic": "135.0", "Average Request Turnaround(ms)": "57.66", "Number of Failure Traffic": "0.0", "Number of Success Traffic": "136.0" } } }
      ]
    }

Is there a way to extract the API names and the traffic details? If this could be turned into a tabular form with the date, API names and traffic details, it would be great for creating a dashboard or chart.
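A sketch of one way to flatten this into a table: explode the data array with spath and mvexpand, then pull the dynamic API name, date, and metrics out of each array element with rex. The index/sourcetype and the output field names are assumptions:

    index=api_metrics sourcetype=api:metrics:json
    | spath path=data{} output=api_entry
    | mvexpand api_entry
    | rex field=api_entry "\"(?<api_name>[^\"]+)\":\s*\{\s*\"(?<report_date>[^\"]+)\":"
    | rex field=api_entry "\"Total Number of Traffic\":\s*\"(?<total_traffic>[\d.]+)\""
    | rex field=api_entry "\"Number of Failure Traffic\":\s*\"(?<failure_traffic>[\d.]+)\""
    | rex field=api_entry "\"Number of Success Traffic\":\s*\"(?<success_traffic>[\d.]+)\""
    | table report_date api_name total_traffic failure_traffic success_traffic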
I am trying to find the occurrence whenever the state changes due to an error. Below are my sample events:

    2021/08/01 07:12:12.098 host=12345 In
    2021/08/01 07:13:12.098 host=12345 In
    2021/08/01 07:14:12.098 host=12345 Out
    2021/08/01 07:15:12.098 host=12345 Out
    2021/08/01 07:16:12.098 host=12345 In
    2021/08/01 07:17:12.098 host=12345 In
    2021/08/01 07:18:12.098 host=12345 Out
    2021/08/01 07:18:35.098 host=12345 ERROR
    2021/08/01 07:19:12.098 host=12345 In
    2021/08/01 07:20:12.098 host=12345 Out

I need to group the events when the state (In/Out) changed due to an ERROR event. For the above sample events, I should not get any result, because when the ERROR event happened the host was already in the "Out" state. We need to monitor only when an "In" host changes to "Out" due to an ERROR. I tried the below search:

    index=myindex ("Cut-In" OR "Cut-Out" OR "ERROR")
    | rex "host=(?<host>\d+) (?<State>.*)"
    | transaction host startswith="State=In" endswith="Out" maxspan=24h
    | where searchmatch("ERROR")
    | table _time host

But the above query returns a result by grouping the "In" state logged at "07:16:12" as the start of the transaction and "07:20:12" as the end. This is not a valid scenario. Please help me frame the logic.
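As an alternative to transaction, a sketch using streamstats to look at the previous state per host; the field names follow the rex above, and this only flags ERROR events whose preceding event for that host was "In" (checking that the following event is "Out" would need a second pass, e.g. another streamstats over the reversed order):

    index=myindex ("In" OR "Out" OR "ERROR")
    | rex "host=(?<host>\d+)\s+(?<State>\S+)"
    | sort 0 host _time
    | streamstats current=f last(State) as prev_state by host
    | where State="ERROR" AND prev_state="In"
    | table _time host prev_state State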
[Updated] Hi All, @ITWhisperer please help me on this. I have data like below:

    HostName  LastConnected
    ABC  23/08/2021 10:04
    ABC  23/08/2021 10:34
    AAA  23/08/2021 12:01
    AAA  23/08/2021 12:32
    AAA  23/08/2021 13:03
    AAA  23/08/2021 13:34
    ABC  23/08/2021 17:03
    AAA  23/08/2021 15:01
    AAA  23/08/2021 15:35
    ABC  23/08/2021 14:00
    AAA  23/08/2021 21:02
    AAA  23/08/2021 22:03
    AAA  23/08/2021 20:02
    ABC  23/08/2021 11:02
    ABC  23/08/2021 11:34
    ABC  23/08/2021 12:02
    ABC  23/08/2021 13:34
    AAA  23/08/2021 14:02
    AAA  23/08/2021 14:34
    ABC  23/08/2021 15:04
    ABC  23/08/2021 16:34
    ABC  23/08/2021 16:05
    ABC  23/08/2021 22:02
    ABC  23/08/2021 23:36
    AAA  23/08/2021 11:03
    ABC  24/08/2021 11:36
    AAA  24/08/2021 12:03
    ABC  24/08/2021 11:00
    AAA  24/08/2021 12:36
    ABC  23/08/2021 17:36
    AAA  23/08/2021 20:32
    AAA  23/08/2021 21:32

Now, I want output like this (one column per hour, plus TotalHours and Max_Consecutive):

    HostName TotalHours Max_Consecutive 23/08/2021 10 23/08/2021 11 23/08/2021 12 23/08/2021 13 23/08/2021 14 23/08/2021 15 23/08/2021 16 23/08/2021 17 23/08/2021 18 23/08/2021 19 23/08/2021 20 23/08/2021 21 23/08/2021 22 23/08/2021 23 24/08/2021 11 24/08/2021 12 24/08/2021 13 24/08/2021 14 24/08/2021 15
    ABC 4 2 23/08/2021 10:04 23/08/2021 10:34 offline 23/08/2021 12:02 23/08/2021 13:34 23/08/2021 14:00 23/08/2021 15:04 23/08/2021 16:34 23/08/2021 16:05 23/08/2021 17:03 23/08/2021 17:34 offline offline offline offline 23/08/2021 22:02 23/08/2021 23:36 24/08/2021 11:36 24/08/2021 11:00 offline offline offline offline
    AAA 8 5 offline 23/08/2021 11:02 23/08/2021 11:34 23/08/2021 12:01 23/08/2021 12:32 23/08/2021 13:03 23/08/2021 13:34 23/08/2021 14:02 23/08/2021 14:34 23/08/2021 15:01 23/08/2021 15:35 offline offline offline offline 23/08/2021 20:02 23/08/2021 20:32 23/08/2021 21:02 23/08/2021 21:32 23/08/2021 22:03 offline offline 24/08/2021 12:03 24/08/2021 12:36 offline offline offline

Note:
- I have more than 2 lakh (200,000) records, and if the user selects one week of data it should work for that week.
- If a host is connected for the complete hour, then it is online - meaning two entries in the hour.
- If mvcount >= 2 then it is online and we need to count it; if it is 1, no need to count, keep as it is; 0 means offline.

Thank you in advance.
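A rough starting sketch for the hourly online/offline classification described in the note; field names follow the sample data, the strptime format is assumed from the dd/mm/yyyy timestamps, hours with no events at all would still need to be filled in (e.g. with makecontinuous or a timechart) before they can show as "offline", and the Max_Consecutive part is not covered here:

    | eval _time=strptime(LastConnected, "%d/%m/%Y %H:%M")
    | bin span=1h _time
    | stats values(LastConnected) as connections count by HostName _time
    | eval status=if(count>=2, "online", "offline")
    | stats sum(eval(if(status="online",1,0))) as TotalHours by HostName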
I am trying to export and import a dashboard using the Controller API, using the Postman tool.

Ref export, I have this working OK:
- I created an API Client with the administrator role
- I used the Controller API with the API Client credentials to generate a Bearer Token: https://{{controller_uri}}//controller/api/oauth/access_token
- I successfully exported a dashboard: https://{{controller_uri}}/controller/CustomDashboardImportExportServlet?dashboardId=12355 (Headers: Authorization: Bearer {{bearer_token}})

Now when I try to import a dashboard using the API, with:
- a new dashboard name that currently doesn't exist,
- basic authentication (my user account, which also has admin access), because the import API does not support the use of the Bearer Token (open enhancement exists: internal story ID https://jira.corp.appdynamics.com/browse/METADATA-9305),

... I simply get a 500 response. What I tried is:

    Method: POST
    URI: https://{{controller_uri}}/controller/CustomDashboardImportExportServlet
    Body: the JSON of the previously exported dashboard
    Content-Type: application/json

As per the documentation, I also tried using curl and it worked: https://docs.appdynamics.com/4.5.x/en/extend-appdynamics/appdynamics-apis/configuration-import-and-export-api

    curl -X POST --user Allister.Green@RSAGroup:<pw> https://<domain uri>/controller/CustomDashboardImportExportServlet -F file=@dashboard.json

Because the curl example uses a file, I also tried using a file with Postman instead of using the dashboard JSON as the message body, but this also generated a 500 response. To use a file: Body: form-data, KEY: file, VALUE: <filename>, CONTENT TYPE: application/json.

Has anyone got dashboard imports working using Postman, and if so, please can you share how? Thanks, Allister.
Hi, after we upgraded our Splunk version to 8.2.1, under the monitoring console `overview` tab the charts under `CPU usage by process` and `Memory Usage by process` are empty. However, they do appear under the `Resource Usage: Instance` tab. Is anyone able to help explain why the charts are empty under the `overview` tab of the monitoring console?
Hello, in my base search I'm looking for stores with a minimum count of 1 for 4 different kinds of errors. I count the errors, put them in an xyseries table and filter them out, which works great. Now I would like to know which stores hit all the criteria on which day.

Code:

    index=main host=* (thrown NotFoundException:Not found) OR (X-30056) OR (Interceptor for tx_pool ITransactionPool has thrown exception, unwinding now) OR (SocketTimeoutException Read Timeout)
    | rex field=_raw "An accepted error occurred:.(?<exception>\w+-\d+):."
    | rex field=_raw "SocketTimeoutException: R(?<exception>\w+.\w+)"
    | rex field=_raw "serverDataState:.(?<exception>\w+.\w+)"
    | rex field=_raw "Caused by: java.io.InterruptedIOException:.(?<exception>.*)"
    | rex field=_raw "thrown NotFoundException:(?<exception>\w+.\w+)"
    | eval ccc = cooperative+cost_center
    | stats count by ccc exception
    | xyseries ccc exception count
    | search X-30056 > 0 AND "Read Timeout" > 0 AND "Not found" > 0 AND "Output operation aborted" > 0

Result:

    ccc      X-30056  Not found  Output operation aborted  Read Timeout  Read Timeout  Read timed
    0011111  339      6          12                        193           364
    0022222  620      4          1                         640           992           1
    0033333  588      4          7                         2549          4956          1

What I would like to achieve is the following:

    Date        ccc
    08/17/2021  0011111
    08/18/2021  0022222
    08/20/2021  0033333

I'm thankful for any help!
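A sketch of one way to keep the day in the result: instead of xyseries, bucket _time by day, count each exception type with conditional stats, and then filter; the exception values are the ones produced by the rex extractions above, and the base search stays the same:

    ... (same base search, rex extractions and eval ccc as above)
    | bin span=1d _time
    | eval Date=strftime(_time, "%m/%d/%Y")
    | stats count(eval(exception="X-30056")) as x30056
            count(eval(exception="Not found")) as not_found
            count(eval(exception="Read Timeout")) as read_timeout
            count(eval(exception="Output operation aborted")) as output_aborted
            by Date ccc
    | where x30056 > 0 AND not_found > 0 AND read_timeout > 0 AND output_aborted > 0
    | table Date ccc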
Hi, I have the following SPL as a dashboard panel which shows real-time searches. This is so I can contact the owners and discuss converting them to scheduled reports instead:

    | rest /services/search/jobs
    | search eventSorting=realtime
    | eval author=upper(author)
    | lookup snow_sys_user_list.csv user_name as author
    | table author label eventSearch dv_name dispatchState, eai:acl.owner, isRealTimeSearch, performance.dispatch.stream.local.duration_secs, runDuration, searchProviders, splunk_server

However, the panel is still showing reports that have been converted to scheduled reports/alerts or deleted entirely. Is there some SPL I have to add to get it to only see "active" real-time searches? Thanks
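For illustration, "active" usually comes down to filtering on the job's dispatch state and the real-time flag, both of which already appear in the table above; a sketch, where the exact dispatchState value(s) to keep are an assumption:

    | rest /services/search/jobs
    | search isRealTimeSearch=1 dispatchState="RUNNING"
    | eval author=upper(author)
    | table author label eventSearch dispatchState runDuration splunk_server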
Hello, I have updated a Splunk cluster from v7.3.8 to v8.1.2 following the documentation provided by Splunk, and since the update we have an issue with the scheduled searches. Scheduled searches run normally after a search head cluster restart, but after some time they are skipped on the Captain and they do not run at all on the other nodes. In the screenshots above, scheduled searches were running until 8 AM CET and then all were skipped on the Captain, and the other search heads did not process any scheduled searches. I found a workaround: move the Captain to another search head, and then scheduled searches will run again, as seen in the example above.

The cluster is composed of 3 indexers, 3 search heads and 1 master node. I have increased the Relative concurrency limit for scheduled searches to 70% and the Relative concurrency limit for scheduled searches to the same 70%. I also adapted limits.conf to:

    # The base number of concurrent searches.
    base_max_searches = 60
    # Max real-time searches = max_rt_search_multiplier x max historical searches.
    # max_rt_search_multiplier = 1
    # The maximum number of concurrent searches per CPU.
    max_searches_per_cpu = 10
    max_searches_perc = 60

But nothing helps. A sure way to reproduce this on the system is to stop one of the search heads and then start it: approximately 10 minutes after the search head starts, all scheduled searches will be skipped on the Captain.

In the logs there is only one type of "error" (actually an info message):

    _ACCELERATE_AF2AEFDE-8E13-4DCA-90CB-C21D356D9A60_iqpress_nobody_e0c3b6f1a41c2518_ACCELERATE_ The maximum number of concurrent historical scheduled searches on this cluster has been reached (220)

Thank you very much in advance.
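For anyone chasing the same symptom, a sketch of the usual first check against the scheduler's own logs to see what it reports as the skip reason; the index, sourcetype and field names are standard Splunk internal logging, but verify them in your environment:

    index=_internal sourcetype=scheduler status=skipped
    | stats count by reason savedsearch_name host
    | sort - count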