All Topics

Find Answers
Ask questions. Get answers. Find technical product solutions from passionate members of the Splunk community.


I have a monitor input that picks up all the logs in a folder, but the folder contains several types of logs, and I only need the ones that start with a certain name. For example, among apple, banana, mango, dns.log, dns_1123.log, and dns3_1.log, I need the logs that start with dns and end in anything else. I understand that I can use dns*:

[monitor:///folder1/folder2/folder3/folder/logs/dns*]
disabled = false
host = 10.10.10.10
index = myindex
sourcetype = mysourcetype

But when I check the logs that are being indexed, all the logs are arriving, even the ones that I don't need. How else can I make only the logs that start with dns arrive, and not the other logs?
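A sketch of one alternative, in case the wildcard in the monitor path is the problem (the whitelist line is the only new setting; the rest is copied from the stanza above): monitor the directory itself and add a whitelist, which Splunk applies as a regular expression against the full file path:

```
[monitor:///folder1/folder2/folder3/folder/logs]
disabled = false
host = 10.10.10.10
index = myindex
sourcetype = mysourcetype
# whitelist is a regex, not a glob; anchor it to the filename portion
whitelist = \/dns[^\/]*$
```

After changing inputs.conf a forwarder restart is typically needed, and data that was already indexed will not be removed by this change.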
Hi All, I'm looking for some assistance on what a regex would look like when every new line starts with an open bracket, e.g. (. I am a complete novice with regex, so I'm asking how this would be achieved. I kind of understand the error, just not how to resolve it. My error (from btool.log) is:

btool-support - Bad regex value: '([\r\n]+)\s*(', of param: props.conf / [<sourcetype] / LINE_BREAKER; why: missing closing parenthesis

Many thanks
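For context on the error itself: LINE_BREAKER must be a valid regex with at least one capture group, and an unescaped ( opens a group, which is why btool reports a missing closing parenthesis. A sketch of what a fixed setting might look like (assuming every event begins with a literal open bracket; the sourcetype name is a placeholder):

```
# props.conf
[your_sourcetype]
# the capture group ([\r\n]+) is discarded between events; the lookahead
# (?=\() breaks just before each literal "(" without consuming it
LINE_BREAKER = ([\r\n]+)(?=\()
SHOULD_LINEMERGE = false
```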
Hi guys, I'm hoping for a bit of help. My total_bytes and src_zone aren't populating. I tried a few things at the groupby stage, in both stats and tstats, and at this point I'm running out of ideas on how to fix it. Can you please have a look at it?

| tstats summariesonly=t prestats=t latest(_time) as _time values(All_Traffic.user) as All_Traffic.user, values(All_Traffic.dest_translated_ip) as All_Traffic.dest_translated_ip, values(All_Traffic.rule) as All_Traffic.rule, values(All_Traffic.src_zone) as All_Traffic.src_zone values(All_Traffic.dest_zone) as All_Traffic.dest_zone, values sum(All_Traffic.bytes) AS All_Traffic.bytes values(sourcetype) as sourcetype count from datamodel=Network_Traffic where (nodename = All_Traffic) NOT (index="zscaler") All_Traffic.src_ip="10.24.224.12" All_Traffic.dest_ip="213.52.102.12" groupby All_Traffic.src_ip All_Traffic.dest_ip All_Traffic.action All_Traffic.app
| tstats summariesonly=t prestats=t append=t latest(_time) as _time values(Web.user) as Web.user, sum(Web.bytes) AS Web.bytes values(sourcetype) as sourcetype count from datamodel=Web where (nodename = Web) Web.src="10.24.224.12" Web.dest_ip="213.52.102.12" groupby Web.src Web.dest_ip Web.action Web.url Web.app
| eval src=case(isnotnull('All_Traffic.src_ip'), 'All_Traffic.src_ip', isnotnull('Web.src'), 'Web.src')
| eval dest=case(isnotnull('All_Traffic.dest_ip'), 'All_Traffic.dest_ip', isnotnull('Web.dest_ip'), 'Web.dest_ip')
| eval action=case(isnotnull('All_Traffic.action'), 'All_Traffic.action', isnotnull('Web.action'), 'Web.action')
| eval All_Traffic_url="N/A"
| eval app=case(isnotnull('All_Traffic.app'), 'All_Traffic.app', isnotnull('Web.app'), 'Web.app')
| stats latest(_time) as _time values(All_Traffic_url) as All_Traffic_url values(All_Traffic.app) as All_Traffic.app values(Web.app) as Web.app values(Web.user) as Web.user, values(All_Traffic.user) as All_Traffic.user, values(All_Traffic.dest_translated_ip) as dest_translated_ip, values(All_Traffic.dest_zone) as dest_zone, values(All_Traffic.src_zone) as src_zone values(All_Traffic.rule) as rule, sum(All_Traffic.bytes) AS All_Traffic.bytes, sum(Web.bytes) AS Web.bytes, values(sourcetype) as sourcetype, count by src dest action app Web.url
| eval user=case(isnotnull('All_Traffic.user'), 'All_Traffic.user', isnotnull('Web.user'), 'Web.user')
| eval url=case(isnotnull('Web.url'), 'Web.url', isnotnull(All_Traffic_url), All_Traffic_url)
| stats latest(_time) as _time values(user) as user, values(All_Traffic.dest_translated_ip) as dest_translated_ip, values(All_Traffic.src_zone) as src_zone, values(All_Traffic.dest_zone) as dest_zone, values(All_Traffic.rule) as rule, sum(All_Traffic.bytes) AS All_Traffic.bytes, sum(Web.bytes) AS Web.bytes, values(sourcetype) as sourcetype count by src dest action app url
| fillnull value=0 All_Traffic.bytes, Web.bytes
| eval total_bytes='All_Traffic.bytes'+'Web.bytes'
| eval total_bytes=tostring(total_bytes/1024/1024, "commas") + " MB"
| fillnull value="N/A" src, src_zone, dest, dest_dns, dest_translated_ip_dns, dest_zone, action, app, rule, user
| fields _time, sourcetype src, src_zone, dest, dest_dns, dest_translated_ip_dns, dest_zone, action, app, rule, user, total_bytes, count, url

Thanks in advance!
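Two things that may explain the blanks (guesses from reading the query, not verified against the data): the second stats refers to All_Traffic.src_zone, All_Traffic.dest_translated_ip, and so on, but those were already renamed to src_zone, dest_translated_ip, etc. by the first stats, so the second stats finds nothing; and fillnull expects a space-separated field list, so the commas in fillnull value=0 All_Traffic.bytes, Web.bytes may keep those fields from being filled, leaving total_bytes null after the addition. Renaming the aggregates to dot-free names early sidesteps both; a minimal sketch of the tail end (assuming src_zone was renamed by an earlier stage, as it is here):

```
...
| stats latest(_time) as _time values(src_zone) as src_zone sum(All_Traffic.bytes) as traffic_bytes sum(Web.bytes) as web_bytes by src dest action app url
| fillnull value=0 traffic_bytes web_bytes
| eval total_bytes=traffic_bytes+web_bytes
| eval total_bytes=tostring(total_bytes/1024/1024, "commas") + " MB"
```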
Hello Splunkers, Can we forward events from one indexer cluster to another indexer cluster? We are supposed to fetch events from servers and store them on our indexer cluster, while also forwarding them to the customer's indexer cluster. The customer doesn't want us to mirror events by using a HF. So can we do the same by enabling HF configurations on every indexer of our cluster? I think this can be done using the configurations in the routing and filtering docs, but I still need to confirm with experts whether there would be any issues with such an approach. Another aspect: suppose I want a particular event under index A on our indexers and the customer needs the same event under index B on their side, where we forward the events unindexed and they index them on their indexers; is this possible? I'm hoping there won't be any sequencing issues when multiple indexers are forwarding events, since the timestamps would be appropriate. We are OK with 2x license consumption... maybe we will drop the events on our side that are not needed. PS: I'm new to Splunk and trying to understand how we can use it best for us and for the customer.
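For the first part, indexers can index locally and forward a copy onward without a heavy forwarder in between; a sketch of the usual outputs.conf on each indexer of the source cluster (hostnames and ports are placeholders):

```
# outputs.conf on each indexer of our cluster
[indexAndForward]
index = true

[tcpout]
defaultGroup = customer_cluster

[tcpout:customer_cluster]
server = customer-idx1:9997,customer-idx2:9997
```

One caveat: data leaving an indexer this way is already cooked/parsed, so whether the customer can re-assign it to their own index B on receipt is worth verifying against the routing and filtering docs before committing to this design.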
Hi there, I want to search events, for example A=B*xy, where B is another field name with different values depending on user input, and * is the wildcard. So I'm looking for events where A would be NYabxy, NYccxy, etc. Here the value of B is NY. How would I write the search syntax? This doesn't work: | search A=B*xy, as here B is considered a string, not a field name. Would "where" be a better alternative? The main point of doing this is that I want to make my search more efficient: I want Splunk to only search events where A=NY*xy is applicable, instead of searching every event with an A field.
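A sketch of the where/like approach (assuming B holds the literal prefix, e.g. NY; in like(), % is the wildcard):

```
... | where like(A, B."%xy")
```

One caveat: where runs after events are retrieved, so it won't narrow the initial index scan the way a literal A=NY*xy term would. If B's value can be found by another search, a subsearch that builds the literal term is one way to keep the filtering at search time (index names here are placeholders):

```
index=myindex [ search index=lookup_source | head 1 | eval search="A=".B."*xy" | fields search ]
```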
Hi All, The data ingested into our index is in proper JSON format and Splunk is converting it into a JSON object automatically, but I'm unable to extract/access any of the child objects along with their nested attributes. Example:

{
  "name": "notificationService",
  "requestId": "ee76e5cf-90cc-521f-bc96-bdf6f39f5bc8",
  "parsedEvent": {
    "event": {
      "eventName": "Notification",
      "timestamp": "2020-11-26T18:55:14.000+11:00",
      "delivery": "NotifierSystem",
      "notificationEventType": "EMAIL"
    },
    "metadata": {
      "correlationId": "1603246877854"
    }
  },
  "msg": "Starting to process event",
  "time": "2020-11-26T08:02:39.123Z"
}

Search query:

.... | table requestId, name, parsedEvent.metadata.correlationId, parsedEvent.event

is getting me: ee76e5cf-90cc-521f-bc96-bdf6f39f5bc8, notificationService, 1603246877854, <blank>

As you can see, I'm not able to get the "parsedEvent.event" object along with its child attributes, but I am able to access "parsedEvent.metadata.correlationId" successfully, which doesn't have any children. Any help would be appreciated. Thanks in advance.
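table can only render leaf (scalar) fields, which may be why parsedEvent.event shows blank while the correlationId leaf works. A sketch that pulls the nested attributes out explicitly with spath (paths taken from the sample event above):

```
.... | spath path=parsedEvent.event.eventName output=eventName
     | spath path=parsedEvent.event.notificationEventType output=notificationEventType
     | table requestId, name, parsedEvent.metadata.correlationId, eventName, notificationEventType
```

With automatic JSON extraction, leaf fields such as parsedEvent.event.eventName may already exist, in which case tabling those dotted leaf names directly could be enough.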
I have two indexes: INDEX1 and INDEX2. These indexes have the same fields: FIELD1, FIELD2, FIELD3, but they can have different values. For example:

INDEX1: FIELD1=5, FIELD2=8
INDEX2: FIELD1=5, FIELD2=7

I need to get a table that shows only the fields with different values in the different indexes. Following the previous example:

|INDEX1|FIELD2=8|
|INDEX2|FIELD2=7|

or something similar.
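A sketch of one approach (assuming FIELD1 identifies matching rows across the two indexes, as in the example): chart the compared field per index, then keep only rows where the two columns disagree:

```
(index=INDEX1 OR index=INDEX2)
| chart values(FIELD2) over FIELD1 by index
| where 'INDEX1' != 'INDEX2'
```

Each remaining row shows the two differing FIELD2 values side by side, labeled by index.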
Hello, I was wondering if what the title says is possible: ingesting into Splunk only specific strings, or only events that match a regex. Regards
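The usual index-time pattern for this (sourcetype name and regex are placeholders) is to route everything to the null queue by default and then route back only the events matching a regex; transforms apply in order, so the keep rule must come last:

```
# props.conf
[my_sourcetype]
TRANSFORMS-filter = drop_all, keep_matching

# transforms.conf
[drop_all]
REGEX = .
DEST_KEY = queue
FORMAT = nullQueue

[keep_matching]
REGEX = your_pattern_here
DEST_KEY = queue
FORMAT = indexQueue
```

This runs on the parsing tier (indexers or a heavy forwarder), not on universal forwarders.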
I am having trouble with some monitored CSVs that get refreshed daily. There are 5 CSVs in a common directory that I have a monitored input configured for. The maximum number of columns across the 5 CSVs is 68. The file sizes are typically 1.5 MB to 2 MB, with one file being 22 MB. The largest file has roughly 39,000 rows and the smallest about 1,500. I have noted that Splunk will ingest the CSVs but caps out at around the 150th record for each file. For the 22 MB file (which has the 39,000 rows) it doesn't seem to read the file at all (this may be due to some columns that look weird, so it may be bombing out). I checked limits.conf for the column limit on ingest and it is configured at around 200 (for a reason I thought I might need later down the track), but even with the default limits.conf, the number of columns does not exceed the default value.
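One guess worth ruling out (the symptoms here could have several causes): Splunk identifies a monitored file by a CRC of its first 256 bytes, so CSVs that share an identical header row can be mistaken for the same file, and a daily full rewrite can confuse the tail pointer. Salting the CRC with the path is a cheap experiment:

```
# inputs.conf for the monitored directory
[monitor:///path/to/csvdir]
# distinguish files whose first 256 bytes (the CSV header) are identical
crcSalt = <SOURCE>
```

Note that crcSalt = <SOURCE> can cause re-indexing of renamed or rotated files, so it fits directories where filenames are stable.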
I've just installed the Windows app for Windows infrastructure and its add-ons, and when I run the prerequisite test, it fails: it finds no events when looking for sourcetype="ActiveDirectory*". I searched the entire add-on and couldn't find any reference to that sourcetype anywhere. Could you help me out? What can this be? Perhaps an error with the TA?
Afternoon Team, I was hoping to get some assistance with building an alert in Splunk Enterprise Security that can pick up on old AWS keys created before a certain point in the past, potentially 6 months for example. The below is a preliminary alert that I'm building, but I could use some guidance. We are ingesting events from AWS CloudTrail.

index="aws-cloud-trail" responseElements.accessKey.accessKeyId=AKIA*
| spath eventName
| search eventName=CreateAccessKey userIdentity.type!=AssumedRole
| rename responseElements.accessKey.createDate as creationdate
| eval creationdate=strptime(creationdate, "%H:%M.%S %p, %a %m/%d/%Y")
| where creationdate < (now() - (86400 * 30))
| table sourceIPAddress userName src_user userIdentity.type userAgent action status creationdate responseElements.accessKey.status responseElements.accessKey.accessKeyId

Has anyone else come across something similar? It would be great to hear from you! Happy to lend more context if needed.
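One thing worth checking (a guess from the query, not verified against your events): CloudTrail's createDate is usually an ISO 8601 string such as 2020-06-01T12:34:56Z, in which case a format string like "%H:%M.%S %p, %a %m/%d/%Y" would make strptime return null and the where clause would never filter as intended. A sketch with an ISO format and a 6-month window:

```
... | eval creationdate=strptime(creationdate, "%Y-%m-%dT%H:%M:%SZ")
    | where creationdate < relative_time(now(), "-6mon")
```

The format string must match the raw createDate value exactly, so it's worth eyeballing one event first.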
I have 2 indexes: index1 and index2. I need to compare values in both indexes and show only the differences in fields. The same field, Time, always exists in both indexes, so I need to match on this field. For example:

Index1
Time - 26.11.20
Field1 - xxxx
Field2 - xxxx

and

Index2
Time - 26.11.20
Field1 - xxxx
Field2 - xxxy

I want to get a table:

Field2 - xxxx
Field2 - xxxy
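A sketch of one way to line the two indexes up on Time and keep only the rows that differ (field names taken from the example):

```
(index=index1 OR index=index2)
| stats values(Field2) as Field2 dc(Field2) as differs by Time
| where differs > 1
```

Rows with differs > 1 are the ones whose Field2 values disagree between the indexes at that Time; the values() column then holds both variants.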
I have these as the final lines of my bash script:

response=$(curl -H "Authorization: Bearer $access_token" -H "Accept: application/json;odata=verbose" -s "$url")
echo "$response"
echo "Test1"

The script runs; however, only 'Test1' is sent to the index/Splunk. My response, which I know contains the output of the curl command, seems to be ignored. The only reason I can think of is that the body is too large? The response is JSON but quite large, I'd say pages' worth.
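If the response is one very long single line, the likelier culprit is on the Splunk side: events longer than the sourcetype's TRUNCATE setting (10,000 bytes by default) get cut, and echo itself can mangle arbitrary data. A small sketch of the output step (the response value here is a stub standing in for the real curl result):

```shell
# stub standing in for the real curl response
response='{"d":{"results":[{"Id":1}]}}'

# printf is safer than echo for arbitrary strings: echo may interpret
# backslash escapes or a leading dash in the data
printf '%s\n' "$response"
```

If the full response really is pages' worth on one line, raising TRUNCATE in props.conf for the script's sourcetype, or pretty-printing the JSON so it spans multiple lines, may be worth trying.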
Hi, We are seeing a long lag for our forwarders to send data to Splunk - up to 4 hours! When we run the command below, we can see output with a high max_lag in seconds. We are monitoring a file directory with lots and lots of files (100,000+); we are wondering if this could be the issue, and whether there is some way to know from the forwarder that it can't keep up? Or is there another solution? We are testing this prop now, but we are unsure whether it will help, as we are unsure whether it is the issue:

ignoreOlderThan = 1d

index=* host = TEST_CLUSTER1 sourcetype!=G1
| eval lag_sec=_indextime-_time
| stats max(lag_sec) as max_lag max(_indextime) as max_index_time max(_time) as max_event_time by sourcetype host source
| addinfo
| eval index_lag_for_search = info_search_time - max_index_time
| eval event_lag_for_search = info_search_time - max_event_time
| sort - max_lag
| table sourcetype host source max_lag info_search_time info_min_time info_max_time, max_event_time, max_index_time, index_lag_for_search, event_lag_for_search

The image below shows the slowness of some of the files.
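If the directory really holds 100,000+ files, the tailing processor's file-descriptor budget can be a bottleneck; alongside ignoreOlderThan, a sketch of a setting to experiment with (the value is illustrative):

```
# limits.conf on the forwarder
[inputproc]
# number of files the tailing processor keeps open at once (default 100)
max_fd = 256
```

One caveat about ignoreOlderThan: once a file is ignored, it is never checked again, even if it is later modified, so it is best suited to directories where old files truly never change.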
Hello, I am stuck. This error message keeps appearing, so I cannot run any searches; they just get queued up. It has rendered my Splunk instance unusable. I was thinking I may have too many searches running in the background due to the tutorial zip files I loaded, but I cannot delete the tutorial files because I cannot run any searches. Does this also stop me from seeing the csv and zip files I recently uploaded successfully, as I cannot see them? Any help would be great, thanks a lot.
Hi Splunkers, We have servers running in 2 different on-premises data centres. We are planning a Splunk deployment architecture to fetch the events from these servers. We are planning to have 1 indexer at each data centre and to cluster these 2 indexers. The reason for clustering is that if 1 indexer goes down, the other one can keep fetching events from the servers of the other DC with no downtime. Clustering will also let us use 1 search head for both DCs, as we don't have many users for monitoring. The data centres are connected to each other with dedicated LAN channels, and both indexers would be on the same virtual network. Will this be as simple as a single-site cluster? If I configure them as a single-site cluster, would there be any architectural issues, or am I going the right way? Or is there a multisite concept that needs to be implemented?
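A single-site cluster spanning two well-connected DCs is a workable starting point; a sketch of the server.conf stanzas (hostnames and keys are placeholders):

```
# server.conf on the manager node
[clustering]
mode = master
replication_factor = 2
search_factor = 2
pass4SymmKey = <placeholder>

# server.conf on each of the two indexers
[clustering]
mode = slave
master_uri = https://manager-host:8089
pass4SymmKey = <placeholder>
```

One caveat: with replication_factor=2 each DC holds a full copy of the data, but the manager node lives in one DC, so losing that DC also loses cluster coordination; DC-level failure is the main scenario multisite clustering exists for, so it is worth weighing whether whole-DC loss is in scope.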
Hi, I've configured an alert to be sent to email and AWS SNS. My query usually finds multiple results; when the alert is sent to email, it contains all the results. However, when the alert is sent to AWS SNS, it only contains a single event (most likely the first one). I've looked at https://docs.splunk.com/Documentation/AddOns/released/AWS/ModularAlert but can't see anything related to enabling the sending of all events. Anyone have a clue? The SNS alert looks like the following:
Hi, I have the following issue: I have a dataset containing data like

Order number = 12345 Description = "AB: jdkjsd" planned_date="12.3.2020"
Order number = 12346 Description = "BC: jdkjsd" planned_date="12.3.2020"
Order number = 12347 Description = "BA: jdkjsd" planned_date="12.3.2020"

Now I'd like to have a table that counts the number of events for "BC:*", "AB:*" OR "BA:*", and so on. I'm quite new and Google didn't help me; can someone help? Thanks!
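A sketch of one way to count by the prefix before the colon (assuming Description is already an extracted field):

```
... | eval prefix=mvindex(split(Description, ":"), 0)
    | stats count by prefix
```

split breaks the description on ":" and mvindex takes the first piece ("AB", "BC", "BA", ...), which stats then counts per distinct prefix.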
I am using Splunk 8.1.0 with the Sankey Diagram 1.5.0 app. My query for the Sankey diagram conditionally, based on a form input, excludes zero values. I would like to conditionally, say, based on a token being set, override the ">= 0" legend label with "> 0". I just want to remove the "=" from that label. As a starting point, I've attempted to unconditionally override the Sankey diagram CSS. For example:

<panel>
  <html>
    <p/>
    <style>
      .sankey_diagram > div.legend > ul > li:nth-child(1) > span {
        background-color: red;
      }
    </style>
  </html>
</panel>

but that's had no effect. (I was considering attempting a pure CSS approach: hiding the original content, then using a pseudo-element to inject my override.) It's occurred to me that I could also use a JavaScript document.querySelector to set the content of that span element in the legend, but I'd prefer not to have an external .js file for this.
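A sketch of the pseudo-element idea (the selectors are guesses at the Sankey app's markup and may need adjusting with the browser inspector; note too that Simple XML sometimes scopes or rewrites dashboard <style> blocks, which could explain why the red-background test had no effect, and prefixing the selector with the panel's id is a common workaround):

```
/* hide the original ">= 0" text but keep its layout box */
.sankey_diagram div.legend ul li:nth-child(1) span {
  visibility: hidden;
  position: relative;
}
/* overlay the replacement label */
.sankey_diagram div.legend ul li:nth-child(1) span::after {
  visibility: visible;
  position: absolute;
  left: 0;
  content: "> 0";
}
```

To make it conditional on a token, wrapping the style block in an element with depends="$mytoken$" is one Simple XML-native option.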
curl -X PUT -d "enabled"=true --user username@account:password https://[Redacted].saas.appdynamics.com/controller/alerting/rest/v1/applications/Application-ID/health-rules/HealthRule-ID/

I am using this command to enable the health rule, but no change is reflected on the SaaS controller.

^ Edited by @Ryan.Paredez to remove Controller URL. Please do not share your Controller URL on Community posts for security and privacy reasons.