All Posts

The monitor stanza should specify an index name so Splunk knows where to put the data. Without that, everything goes into the 'main' index. Your (and everyone else's) search query should specify the index name to search. This makes the query more efficient and avoids reliance on your default index. The index name in the query must match the index name in the monitor stanza for Splunk to find the data. The message about the tags.conf file is a symptom of a different problem and should be easy to correct. Go to line 1 of the file specified in the message and URL-encode the value.
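As a rough sketch of what that might look like (the index name here is an illustrative assumption, not from the original post; the index must already exist on the indexers):

```
# inputs.conf (local) -- send the monitored files to a named index
[monitor:///var/log/snort/*alert_json.txt*]
sourcetype = snort3:alert:json
index = snort          # hypothetical index name

# tags.conf -- colons in the stanza value must be URL-encoded (":" becomes "%3A")
[eventtype=snort3%3Aalert%3Ajson]
# ...existing tag assignments unchanged...
```

The matching search would then be `index=snort sourcetype="snort3:alert:json"`.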
Hi @nisheethbaxi , if you're sure you have the backslashes in your logs, you could try this regex: | rex "account_id\\\":\\\"(?<account_id>[^\\]+)" which you can test at https://regex101.com/r/maaQBE/1 or the following (there's an issue using a regex in Splunk when there's a backslash) | rex "account_id\\\\\":\\\\\"(?<account_id>[^\\]+)" Ciao. Giuseppe
Hello, maybe I'm missing something, but it seems that the result is the same.
I have a splunk query that has the following text in the message field: "message":"sypher:[tokenized] build successful -\xxxxy {\"data\":{\"account_id\":\"ABC123XYZ\",\"activity\":{\"time\":\"2024-05-31T12:37:25Z\}}" I need to extract the value ABC123XYZ, which is between account_id\":\" and \",\"activity. I tried the following query but it's not returning any data. index=prod_logs app_name="abc" | rex field=_raw "account_id\\\"\:\\\"(?<accid>[^\"]+)\\\"\,\\\"activity" | where isnotnull(accid) | table accid
The above suggestions are great, but what worked on my end was simply scrolling to the end of the user agreement (I think the Splunk creators want us to read through it). I did not have to change anything in any of the files listed in the first suggestion.
If the word list under your tag cloud is displaying the words you expect to see, then you might just need to use the Format button to define your field label and value. Next to your visualization type, click Format. Then enter your field name 'word', value type 'count', and then the font sizes you want. I used 100 and 8.
Hi @LearningGuy, I understand that you're not an admin, but roles are the only way to restrict access in Splunk. So, ask your administrators to create different roles to enable your dashboards and knowledge objects only for selected (by role) users. Ciao. Giuseppe
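As a rough sketch of what the admins would set up (the role and index names here are hypothetical; in practice this is usually done in Splunk Web under Settings > Roles rather than by editing files):

```
# authorize.conf -- hypothetical role that can only search one index
[role_dashboard_viewer]
importRoles = user
srchIndexesAllowed = my_app_index
srchIndexesDefault = my_app_index
```

Dashboards and knowledge objects can then be shared with read permission granted only to that role.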
Hi @gcusello , just to clarify: I am not an admin, so it's not possible for me to create a role, correct? Thanks
Well obviously it is possible! The "issue" is that the total emails are counted by user, subject and action, whereas the other two counts are by just user and subject. You could change the eventstats to correct this: | eventstats sum(eval(if(action="quarantined", 1, 0))) as quarantined_count_peruser, sum(eval(if(action="delivered", 1, 0))) as delivered_count_peruser, sum(total_emails) as total_emails by src_user, subject
Hello @ITWhisperer , the result should be the total email count, plus the specific counts for the delivered and quarantined ones. In my screenshot, there are for example 6 total emails (first row) and 12 delivered, which is not possible. So a possible expectation would be: Case 1: 6 total emails, 6 delivered, 0 quarantined. Case 2: 6 total emails, 3 delivered, 3 quarantined. Case 3: 6 total emails, 1 delivered, 5 quarantined.
I installed the Snort 3 JSON Alerts add-on. I made changes in inputs.conf (/opt/splunk/etc/apps/TA_Snort3_json/local) like this: [monitor:///var/log/snort/*alert_json.txt*] sourcetype = snort3:alert:json When I search for events like below (sourcetype="snort3:alert:json") there is NOTHING. But Splunk knows there is something in that path, and how much. Like below. What more I can tell is what Splunk reports when starting: Value in stanza [eventtype=snort3:alert:json] in /…/TA_Snort3_json/default/tags.conf, line 1 is not URL encoded: eventtype = snort3:alert:json Your indexes and inputs configurations are not internally consistent. For more info, run 'splunk btool check --debug' Please help.
When we built our (Splunk-related) environment, I checked the Splunk docs for information about the proper functioning of a single indexer. I may be mistaken, but in this case I selected the indexer color status. The API endpoint is "bla bla bla/services/server/info/health_info". If an indexer has green or yellow status, the LB decides that node is OK. If an indexer has red status, the LB decides that node is not OK and selects another one.
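If it helps, similar health information is reachable from inside Splunk with the REST search command (a sketch only; the exact endpoint your LB polls may differ from this one):

```
| rest /services/server/health/splunkd splunk_server=local
| table title health
```

The health field reports green, yellow, or red for the splunkd health report, which is roughly the same signal a load balancer would poll over the management port.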
Check splunkd.log for replication errors. Verify the AWS security groups allow communication among all indexers on ports 8080 and 9887 and to the Cluster Manager's port 8089.
Try it like this index="os" host="abcd*" source="/opt/os/*/logs/*" "implementation:abc-field-flow" (("TargetID":"abc" "Sender":"SenderID":"abc") OR ("status": "SUCCESS")) | rex "CORRELATION ID :: (?<correlation_id>\S+)" | eval success_id = if(searchmatch("COMPLETED"), correlation_id,null()) | eventstats values(success_id) as success_id by correlation_id | where correlation_id = success_id
In what way is it not what you expected? Please share what you expected.
Hi yuanliu, thank you for your reply. I have tried the search you shared, but it doesn't work. Here we have two different searches: 1) request payload: index="os" host="abcd*" source="/opt/os/*/logs/*" "implementation:abc-field-flow" "TargetID":"abc" "Sender":"SenderID":"abc" 2) success payload: index="OS" host="abcd*" source="/opt/os/*/logs/*" "implementation:abc-field-flow" "status": "SUCCESS" I need to query (only for the success payload) in such a way that the correlation ID present in the success payload matches the correlation ID present in the request payload. Could you please help me out? NOTE: Different payloads have different correlation IDs.
Hello @marysan , it seems that the result is not as expected:
Hello all, our current environment is a three-site cluster: two sites on-premises (14 indexers, 7 in each site) and one site (7 indexers) hosted on AWS. The AWS indexers were clustered recently. It has been almost 15 days, but the replication factor and search factor are still not met. What might be the reason, and what are the possible ways I can resolve this? There are around 300 fixup tasks pending, and the number has remained the same for the past 2 weeks. I've manually rolled the buckets, but still no use.
The first sin is "monitor by health API" - it doesn't tell you anything about the availability of the syslog input. But from your description it seems that your LB is at least a bit syslog-aware (if you're able to extract the payload and resend it as UDP, that's something). What is it, if you can share this information?
Hi @493600 , good for you, see you next time! Ciao and happy splunking. Giuseppe P.S.: Karma Points are appreciated by all the contributors