All Topics

Is there a working example out there for ingesting metrics from a CSV file without headers using search-time extraction? I can't get it working when NOT using INDEXED_EXTRACTIONS = csv.
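For a header-less CSV, the usual search-time pattern is a `REPORT-` in props.conf pointing at a transforms stanza that names the columns explicitly. A sketch, with the sourcetype and field names invented for illustration:

```ini
# props.conf -- "csv_metrics_noheader" is a hypothetical sourcetype
[csv_metrics_noheader]
REPORT-csvfields = csv_metrics_fields

# transforms.conf
[csv_metrics_fields]
DELIMS = ","
FIELDS = "metric_name", "_value", "region"
```

One caveat: a metrics index generally needs metric_name and _value present at index time, so a purely search-time extraction may only work against an events index (for example, followed by `mcollect` into the metrics index).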
Hi all, I have installed Splunk_TA_nix, set up the configuration files, saved them, and restarted Splunk, but when I try to go to the actual app it keeps redirecting me to the setup URL: /Splunk_TA_nix/setup. Any ideas?
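One common cause of that redirect, offered as a guess: the app still reports itself as unconfigured, so Splunk Web keeps forcing the setup page. Marking it configured in a local app.conf usually stops the loop:

```ini
# $SPLUNK_HOME/etc/apps/Splunk_TA_nix/local/app.conf
[install]
is_configured = 1
```

A Splunk restart (or at least a debug/refresh of the app) is needed for the change to take effect.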
Hi, after updating to the latest version, 20.6, I can't add a URL to monitor. Each time I try, the following message appears on screen and in the logs. I have tried many variations of the URL, but the result is the same. I believe it is a bug. Is there a way to disable the validation, or a fix, please?

Screenshot

Logs:

[#|2020-06-09T01:01:49.015-0500|WARNING|glassfish 4.1|com.appdynamics.sim.controller.biz.sam.SamTargetHttpValidator|_ThreadID=4770;_ThreadName=http-listener-1(3);_TimeMillis=1591682509015;_LevelValue=900;|ID000381 Specified target host v1plside01 is a site local address. Site local address is not allowed as host name|#]
[#|2020-06-09T01:01:49.016-0500|WARNING|glassfish 4.1|com.appdynamics.SIM|_ThreadID=4770;_ThreadName=http-listener-1(3);_TimeMillis=1591682509016;_LevelValue=900;|Specified target host v1plside01 is a site local address. Site local address is not allowed as host name|#]
Hello all, I wanted to understand one use case with SonarQube, which is a code-scanning tool with a different setup. Can we integrate it with AppDynamics for a single point of monitoring in this area, with scan logs, errors, alerts, and dashboards? Thanks, Gaurav
Hi, I'm pushing some data to AppDynamics using curl. I've noticed that when I don't explicitly send the eventTimestamp in the request, the business journey does not pick up the message. But I can find the message in the schema, and in these cases the eventTimestamp is equal to the pickupTimestamp. Has anyone experienced this issue, or knows why this happens? Thank you, Ricardo Saraiva
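For reference, a hedged sketch of an Analytics events publish call with an explicit eventTimestamp; the host, schema name, and payload fields here are placeholders, not taken from the post:

```
curl -X POST "https://analytics.example.com/events/publish/mySchema" \
  -H "X-Events-API-AccountName: <globalAccountName>" \
  -H "X-Events-API-Key: <apiKey>" \
  -H "Content-Type: application/vnd.appd.events+json;v=2" \
  -d '[{"eventTimestamp": 1591682509015, "message": "order received"}]'
```

Comparing a request like this with and without the eventTimestamp field should confirm whether the business journey correlation depends on it.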
Hello all, I can't figure out how to build a lookup with a condition. I have the following table from my base search:

SubnetName    ip_address
Subnet_ABCD   10.177.99.53
Subnet_1234   10.8.183.3
Subnet_1234   10.8.182.233
Subnet_ABCD   10.177.83.244

And the following lookup table:

Last_SubnetName  SubnetID       NetStart     NetEnd
Subnet_A         10.177.0.0/16  10.177.0.1   10.177.255.254
Subnet_B         10.8.0.0/16    10.8.0.1     10.8.255.254
Subnet_B         192.16.0.0/24  192.168.0.1  192.168.0.254

This is the closest I got after reading several articles, but as you can see, I've had no luck; the result is simply blank every time I try it:

index=mybasesearch ( [| inputlookup myLookupTable.csv | table Last_SubnetName,SubnetID,NetStart,NetEnd ] AND last_ip_address>=NetStart AND last_ip_address<=NetEnd )

I need your help to proceed.
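One pattern that usually works for subnet matching, assuming a lookup definition (here called `subnet_lookup`, a name invented for illustration) is created over myLookupTable.csv with CIDR matching on SubnetID:

```ini
# transforms.conf
[subnet_lookup]
filename = myLookupTable.csv
match_type = CIDR(SubnetID)
max_matches = 1
```

```
index=mybasesearch
| lookup subnet_lookup SubnetID AS ip_address OUTPUT Last_SubnetName SubnetID
```

This sidesteps the NetStart/NetEnd range comparison entirely by letting the lookup match ip_address against the CIDR block in SubnetID.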
Hi all, I've been struggling to extract certain values from application logs and assign them to given field names. As I don't know how to write regular expressions in Splunk, I need help writing a query to get the desired output. Here is my base search query:

https://www.myapplication.com/myapi/version5/autofill/ "ERROR"

Here is the output log:

"ERROR" "store.view.app.api.controller.myClientLoggingController" "viewhost02" "myview2_2" <> "catalina-exec-7" "requestId=d4s6666-9d6e-2c0g-7c20-6e9f7wfa7f6" <> "clientIp=234.234.234.22" "store.view.app.api.controller.myClientLoggingController.logError(?:?):My-AngularApp xxxxxxxxxxxxxxxxxxxxxxxxxxxxxcxxxxxxxxxxxx xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx

NOTE: in the log above I have replaced the angle brackets <> with quotes "".

Now I want to extract "requestId", "clientIp", and "My-AngularApp" and assign them to the field names "Req_ID", "Cust_IP", and "App_Name" respectively. Can someone please help with a query to achieve the desired output? I always struggle with rex syntax and can't write the query on my own. Thank you in advance.
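A hedged starting point: the character classes below assume each value ends at whitespace, a quote, or an angle bracket, so they may need adjusting against the real events.

```
https://www.myapplication.com/myapi/version5/autofill/ "ERROR"
| rex "requestId=(?<Req_ID>[^\s\"<>]+)"
| rex "clientIp=(?<Cust_IP>[^\s\"<>]+)"
| rex "logError\(\?:\?\):(?<App_Name>\S+)"
| table Req_ID Cust_IP App_Name
```

The third rex anchors on the literal `logError(?:?):` prefix seen in the sample and captures the token that follows it (My-AngularApp in the example).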
I've been dealing with this TailReader error for a while and have not been able to fix it despite reading all the answers and similar questions. I'm still experiencing data loss every day. As you can see in the .conf files below, I already disabled indexed extractions, since the universal forwarder doesn't extract fields at index time, but I'm still getting the error. I was told to migrate to a heavy forwarder, but I'd prefer to solve it on the UF if possible. I appreciate any help.

inputs.conf

[monitor:///home/audit/oracle/*/v1[12]*.log]
disabled = 0
index = ora
sourcetype = oracle:audit:json
blacklist = (ERROR|lost|ORA|#|DONE)
crcSalt =
initCrcLength = 1000
ignoreOlderThan = 4h
alwaysOpenFile = 1
interval = 30

props.conf

[oracle:audit:json]
DATETIME_CONFIG = CURRENT
#INDEXED_EXTRACTIONS = JSON
KV_MODE = none
MAX_EVENTS = 5
TRUNCATE = 0
TRANSFORMS-TCP_ROUTING_GNCS = TCP_ROUTING_GNCS
TRANSFORMS-hostoverride = hostoverride
TRANSFORMS-HOST_JSON = HOST_JSON
TRANSFORMS-sourcetype_json11 = sourcetype_json11
TRANSFORMS-sourcetype_json12 = sourcetype_json12
TRANSFORMS-sourcetype_sql11 = sourcetype_sql11
TRANSFORMS-sourcetype_sql12 = sourcetype_sql12
I have a link list with three tabs (A, B, and C). When A is clicked, three panels open (X, Y, and Z) plus one drilldown (which doesn't show values unless one of the panels X, Y, or Z is clicked). How do I get the drilldown to be filled automatically with values for the X panel? So when A is clicked, I'd have X, Y, Z, and the drilldown, with X's values, open at the same time. Much appreciated, thank you!
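One way to sketch this in Simple XML, with the token and value names invented for illustration: a `<change>` handler on the link list pre-sets the drilldown's token when A is selected, so the drilldown panel renders immediately instead of waiting for a click on X.

```xml
<input type="link" token="tab">
  <label>Tabs</label>
  <choice value="A">A</choice>
  <choice value="B">B</choice>
  <choice value="C">C</choice>
  <change>
    <condition value="A">
      <!-- hypothetical default: the value panel X would normally pass on click -->
      <set token="drill_value">X_default</set>
    </condition>
  </change>
</input>
```

The drilldown panel's search then references $drill_value$ and runs as soon as A is clicked; clicking X, Y, or Z later would overwrite the token as before.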
Is there a way to delete a directory in the /bin directory of my app during the upgrade process? I have an app that contains /splunklib in the /bin directory; to be compliant with AppInspect I have moved it to /lib. When I install the new version of my app with the upgrade option selected, the existing /bin/splunklib directory still remains, so after installation there are two copies of splunklib, one in the /bin and one in the /lib directory. So far the only way I have been able to resolve the issue is to delete my app using:

./splunk remove app [appname] -auth :

and then install the new version. I would like the app upgrade process to take care of this work rather than requiring command-line access to the Splunk server.
I have "TailReader - Insufficient permissions" errors in my logs. Will Splunk attempt to re-read those files at some interval? So far I only see it doing so once, a few hours back, and not since. I also see several DatabaseDirectoryManager events in the splunkd log relating to the index these logs should have gone to, so I'm not sure what's going on; perhaps it's just a delay?

00 INFO DatabaseDirectoryManager - Finished writing bucket manifest in hotWarmPath=/opt/splunk/var/lib/splunk/kinesis/db
06-04-2020 19:43:49.481 +0000 INFO HotBucketRoller - finished moving hot to warm bid=kinesis~20~BC057F8A-75D0-4CDC-9BD0-EA5E0076B4AF idx=kinesis from=hot_v1_20 to=db_1590613020_1589312100_20 size=956243968 caller=size_exceeded _maxHotBucketSize=786432000 (750MB), bucketSize=1015918592 (968MB)
06-04-2020 19:43:49.483 +0000 INFO IndexWriter - Creating hot bucket=hot_v1_21, idx=kinesis, event timestamp=1590429480, reason="suitable bucket not found, number of hot buckets=1, max=3; closest bucket localid=0, earliest=1577836800, latest=1577836800"
06-04-2020 19:43:49.484 +0000 INFO DatabaseDirectoryManager - idx=kinesis Writing a bucket manifest in hotWarmPath='/opt/splunk/var/lib/splunk/kinesis/db', pendingBucketUpdates=0 . Reason='Adding bucket, bid=kinesis~21~BC057F8A-75D0-4CDC-9BD0-EA5E0076B4AF'
06-04-2020 19:43:49.485 +0000 INFO DatabaseDirectoryManager - Finished writing bucket manifest in hotWarmPath=/opt/splunk/var/lib/splunk/kinesis/db
06-04-2020 19:44:15.461 +0000 INFO DatabaseDirectoryManager - idx=kinesis Writing a bucket manifest in hotWarmPath='/opt/splunk/var/lib/splunk/kinesis/db', pendingBucketUpdates=0 . Reason='Buckets were rebuilt or tsidx-minified (bucket_count=1).'
06-04-2020 19:44:15.463 +0000 INFO DatabaseDirectoryManager - Finished writing bucket manifest in hotWarmPath=/opt/splunk/var/lib/splunk/kinesis/db
06-04-2020 19:44:16.399 +0000 INFO IndexerIf - Asked to add or update bucket manifest values, bid=kinesis~20~BC057F8A-75D0-4CDC-9BD0-EA5E0076B4AF
06-04-2020 19:44:16.454 +0000 INFO DatabaseDirectoryManager - idx=kinesis Writing a bucket manifest in hotWarmPath='/opt/splunk/var/lib/splunk/kinesis/db', pendingBucketUpdates=1 . Reason='Updating manifest: bucketUpdates=1'
06-04-2020 19:44:16.458 +0000 INFO DatabaseDirectoryManager - Finished writing bucket manifest in hotWarmPath=/opt/splunk/var/lib/splunk/kinesis/db
06-04-2020 20:22:02.413 +0000 INFO DatabaseDirectoryManager - idx=kinesis Writing a bucket manifest in hotWarmPath='/opt/splunk/var/lib/splunk/kinesis/db', pendingBucketUpdates=0 . Reason='Updating bucket, bid=kinesis~21~BC057F8A-75D0-4CDC-9BD0-EA5E0076B4AF'
06-04-2020 20:22:02.415 +0000 INFO DatabaseDirectoryManager - Finished writing bucket manifest in hotWarmPath=/opt/splunk/var/lib/splunk/kinesis/db
06-04-2020 20:22:02.417 +0000 INFO DatabaseDirectoryManager - idx=kinesis Writing a bucket manifest in hotWarmPath='/opt/splunk/var/lib/splunk/kinesis/db', pendingBucketUpdates=0 . Reason='Updating bucket, bid=kinesis~21~BC057F8A-75D0-4CDC-9BD0-EA5E0076B4AF'
06-04-2020 20:22:02.418 +0000 INFO DatabaseDirectoryManager - Finished writing bucket manifest in hotWarmPath=/opt/splunk/var/lib/splunk/kinesis/db
06-04-2020 20:22:02.419 +0000 INFO HotBucketRoller - finished moving hot to warm bid=kinesis~21~BC057F8A-75D0-4CDC-9BD0-EA5E0076B4AF idx=kinesis from=hot_v1_21 to=db_1590613020_1589312100_21 size=789688320 caller=size_exceeded _maxHotBucketSize=786432000 (750MB), bucketSize=789729280 (753MB)
06-04-2020 20:22:14.438 +0000 INFO IndexWriter - Creating hot bucket=hot_v1_22, idx=kinesis, event timestamp=1590605700, reason="suitable bucket not found, number of hot buckets=1, max=3; closest bucket localid=0, earliest=1577836800, latest=1577836800"
06-04-2020 20:22:14.439 +0000 INFO DatabaseDirectoryManager - idx=kinesis Writing a bucket manifest in hotWarmPath='/opt/splunk/var/lib/splunk/kinesis/db', pendingBucketUpdates=0 . Reason='Adding bucket, bid=kinesis~22~BC057F8A-75D0-4CDC-9BD0-EA5E0076B4AF'
06-04-2020 20:22:14.440 +0000 INFO DatabaseDirectoryManager - Finished writing bucket manifest in hotWarmPath=/opt/splunk/var/lib/splunk/kinesis/db
06-04-2020 20:22:18.375 +0000 INFO IndexerIf - Asked to add or update bucket manifest values, bid=kinesis~21~BC057F8A-75D0-4CDC-9BD0-EA5E0076B4AF
06-04-2020 20:22:18.455 +0000 INFO DatabaseDirectoryManager - idx=kinesis Writing a bucket manifest in hotWarmPath='/opt/splunk/var/lib/splunk/kinesis/db', pendingBucketUpdates=1 . Reason='Updating manifest: bucketUpdates=1'
06-04-2020 20:22:18.457 +0000 INFO DatabaseDirectoryManager - Finished writing bucket manifest in hotWarmPath=/opt/splunk/var/lib/splunk/kinesis/db
06-04-2020 20:23:15.459 +0000 INFO DatabaseDirectoryManager - idx=kinesis Writing a bucket manifest in hotWarmPath='/opt/splunk/var/lib/splunk/kinesis/db', pendingBucketUpdates=0 . Reason='Buckets were rebuilt or tsidx-minified (bucket_count=1).'
06-04-2020 20:23:15.460 +0000 INFO DatabaseDirectoryManager - Finished writing bucket manifest in hotWarmPath=/opt/splunk/var/lib/splunk/kinesis/db
Hello, I'm new to Splunk, so please pardon me if this is too easy a question. I'm trying to list attempted operations vs. passed operations and categorize them by app. Below is the search that I have:

index="cts-test-app" source=*PERF* | rex "DN: (?<ConsumingApp>.*?)[}\s]" | stats count(eval(searchmatch("GET /Refid"))) AS "Attempted" count(eval(searchmatch("POST /refid"))) AS "Passed"

Now, for both operations, there could be another string indicator. Essentially I want to insert an OR condition, something like this:

index="cts-test-app" source=*PERF* | rex "DN: (?<ConsumingApp>.*?)[}\s]" | stats count(eval(searchmatch("GET /Refid" OR "GET /SomeId"))) AS "Attempted" count(eval(searchmatch("POST /refid" OR "POST /SomeId"))) AS "Passed"

Is there a way to do this with searchmatch? If not, can this search be rewritten in a way that achieves this objective? Any help will be much appreciated.
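Since searchmatch returns a boolean inside eval, one likely fix is to put the OR between two searchmatch calls rather than inside the quoted string (untested sketch built from the query in the post):

```
index="cts-test-app" source=*PERF*
| rex "DN: (?<ConsumingApp>.*?)[}\s]"
| stats count(eval(searchmatch("GET /Refid") OR searchmatch("GET /SomeId"))) AS "Attempted"
        count(eval(searchmatch("POST /refid") OR searchmatch("POST /SomeId"))) AS "Passed"
```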
Greetings! I have a scheduled rule that runs every minute, and it matched an event at 1:30:03 PM that was supposed to send an email, but it didn't. What could be the cause of this? Any suggestions will be appreciated.
I have custom content that I've created in SSE and mapped to various parts of the MITRE framework. The problem is that SSE only seems to be picking up Splunk ES and ESCU content, not the custom content I've created. Is there a solution for this?
I have a use case to write a Splunk query that displays, in a line or area chart, unique and initial AWS access key usage by IAM users in our org, trending over the past year. Management wants to be able to visually show increased cloud adoption over time. Any ideas on how to display this? I feel like I'm almost there with stats, but not quite:

index=blah sourcetype=blah user_type=SAMLuser | stats earliest(eventTime) by userIdentity.userName

This almost gets me there, but it won't depict the stats in a pretty line chart. Thanks!
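A sketch of one way to turn first-seen dates into a trend line, assuming the goal is a cumulative adoption curve; the index, sourcetype, and field names are taken from the post, the rest is illustrative:

```
index=blah sourcetype=blah user_type=SAMLuser
| stats earliest(_time) AS first_seen BY userIdentity.userName
| eval _time=first_seen
| timechart span=1mon count AS new_users
| streamstats sum(new_users) AS cumulative_users
```

Each user contributes one row at their first appearance; timechart buckets those by month, and streamstats accumulates them, so charting cumulative_users as a line or area should show the ever-increasing adoption curve management is after.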
Hi, we have a report generating data on the first day of each month and also on the first day of each week. We need to get the data from the first day of each month. We have the query below:

| eval assetCount=tonumber(substr(Message,42)) | eval month = strftime(_time, "%m") | stats max(assetCount) as "Total Count" by month | sort month desc

But this gives the last data point of each month. Can you please help with getting the first data point of each month, i.e. the report generated on the first day of each month?
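Since stats earliest() keeps the value from the chronologically first event in each group, something like this should pick the first report of each month rather than the maximum (untested sketch built from the query in the post; "%Y-%m" also keeps months from different years separate):

```
| eval assetCount=tonumber(substr(Message,42))
| eval month=strftime(_time, "%Y-%m")
| stats earliest(assetCount) AS "Total Count" BY month
| sort - month
```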
I want to include a value from a lookup table in search results, by using a field value from the main search.
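A minimal sketch of the usual pattern, with the lookup and field names invented for illustration: the `lookup` command matches an event field against a lookup column and OUTPUTs the columns you want added to the results.

```
index=web sourcetype=access_combined
| lookup user_info.csv user_id AS uid OUTPUT full_name AS user_full_name
| table _time uid user_full_name
```

Here the uid from each event is matched against the user_id column of user_info.csv, and the matching full_name is added to each result row as user_full_name.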
The certificate has hostname.domain.local, but the scheduled reports are coming out with hostname:port/PathToReport, minus the domain.local. I have checked etc/system/local/server.conf and it has the fully qualified domain name in there, but it is not being used in the report links.
Hello, I am using Sunburst Viz for one of my charts. When I choose "Zoom in" as the action, I can only see 2 layers initially; when I click on anything, it zooms into more layers. Can I increase the number of layers I see initially? For example, I want to see 3 layers initially. Thanks in advance!
Hi, we are looking to monitor DMZ servers with a SaaS controller. How can we monitor them? Is there any documentation, or any parameters we need to add to the agent startup scripts?

^ Edited by @Ryan.Paredez to clean up the post title.