All Posts

Since the timestamp is at the beginning of the event, the prefix is a simple "^". Try these settings:

TIME_PREFIX = ^
TIME_FORMAT = %Y-%m-%dT%H:%M:%S.%6N%Z

The "%Z" is there to interpret the "Z" in the event as a time zone (UTC).
Try this in your props.conf on your HF. I don't know the time zone of your log file, so I assume it's UTC; change as needed:

TIME_PREFIX = ^
MAX_TIMESTAMP_LOOKAHEAD = 30
TIME_FORMAT = %FT%T.%6QZ
TZ = UTC

I highly recommend you read through best practices for parsing configurations here: https://kinneygroup.com/blog/splunk-magic-8-props-conf/
Time variables can be found here: https://docs.splunk.com/Documentation/Splunk/latest/SearchReference/Commontimeformatvariables
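Once those settings are in place and the HF has been restarted, a quick sanity check (a sketch - substitute your own index and sourcetype) is to compare the parsed event time against the index time on newly arriving events:

index=your_index sourcetype=your_sourcetype earliest=-15m
| eval lag_seconds = _indextime - _time
| stats min(lag_seconds) max(lag_seconds) avg(lag_seconds)

A lag close to zero suggests the timestamp is being parsed correctly; a lag of roughly your UTC offset (e.g. 5 hours) suggests the "Z" is still being misinterpreted. Only events indexed after the restart will reflect the new settings.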
Thanks for the quick response, Mr. Rich.
The UF opens port 8089 on the DS and port 9997 on the indexers. Port 8089 is for management; port 9997 is for data/logs. Data flows from the UF directly to the indexers, not via the DS. Do NOT put a load balancer between a UF and the indexers.
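To make the two connections concrete, here is a minimal sketch of the relevant UF configs, assuming hypothetical hostnames (ds.example.com, idx1/idx2.example.com):

outputs.conf (data to the indexers - the UF load-balances between them itself):

[tcpout]
defaultGroup = primary_indexers

[tcpout:primary_indexers]
server = idx1.example.com:9997, idx2.example.com:9997

deploymentclient.conf (management traffic to the DS):

[target-broker:deploymentServer]
targetUri = ds.example.com:8089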
Yeah, sure:

2023-12-12T19:39:25.400399Z <ip_address> CEF:0|vendor|product|version|AuditMessage_707|description|4|messageId=666 messageCategory=AuditMessage start=2023-12-12T19:39:25.400399Z user=user node=<ip_address> msg=<message>
Hi Rich, I'm asking to be sure. Port 9997 is for sending data, and we have an indexer cluster structure. What should the port opening direction be? Somebody says "you should open 9997 and 8089 to the DS" and I'm asking WHY, because we are doing it like this but that's not an answer. Do the logs go to the DS first and then get written to the indexer?

10.10.10.1 UF
10.10.10.2 DS
10.10.10.3 indexer cluster LB

Scenario 1: 10.10.10.1 UF --9997,8089--> 10.10.10.2 DS
Scenario 2: 10.10.10.1 UF --9997--> 10.10.10.3 indexer cluster LB
            10.10.10.1 UF --8089--> 10.10.10.2 DS
You need to add your index-time configurations on the HF, not on the SH or indexers, since the HF is where your data is being parsed. Your TIME_PREFIX configuration could be simpler, but we would need to see a full sample log line to help with that; by the way, redact any sensitive information if you provide a sample.
"level" is not a valid value for SOURCE_KEY.  Try _raw, instead. [infonull] SOURCE_KEY = _raw REGEX = "level":"info" DEST_KEY = queue FORMAT = nullQueue
Hi - the numbered list provided step-by-step instructions for searching for the items I mentioned. In addition, the lookup errors you see in the UI usually tell you the name of the lookup-related configuration that's having problems. This doc page should help: https://docs.splunk.com/Documentation/Splunk/9.1.2/Knowledge/Aboutlookupsandfieldactions#Lookup_definitions
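For reference, a lookup definition lives in transforms.conf, with an optional automatic lookup in props.conf - a minimal sketch with hypothetical names:

transforms.conf:

[my_lookup_definition]
filename = my_lookup_file.csv

props.conf:

[my_sourcetype]
LOOKUP-mylookup = my_lookup_definition input_field OUTPUT output_field

If the name in the error message doesn't match any such stanza, the definition is missing or not shared to the app where the search runs.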
Hi! I received an event with the following time string:

2023-12-12T13:39:25.400399Z CEF:0.....

This time is already in the correct timezone, but because of the Z, Splunk adds 5 hours. I understand that Z is a timezone indicator, but how can I ignore it? The flow of this event is: Source --> HF --> Indexers. On the HF or indexers I don't have any props or transforms settings. On the search heads I extract a few fields from this event and it works, but I can't extract this time correctly without the Z. I put the following regex inside props.conf on my SHs, and I also tried to put it in the indexers' props.conf:

TIME_PREFIX = ^\d{2,4}-\d{1,2}-\d{1,2}T\d{1,2}:\d{1,2}:\d{1,2}\.\d{1,6}

I tried to add TZ or TZ_ALIAS inside props.conf, but no effect. Where can I be wrong? Thanks
I am not sure it is possible with Dashboard Studio (which I assume you are using since you mentioned JSON). With Classic / SimpleXML dashboards, you can use a change handler on the dropdown to set additional tokens based on the choice made.
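For the SimpleXML route, here is a minimal sketch of such a change handler (the token and choice names are hypothetical):

<input type="dropdown" token="env">
  <label>Environment</label>
  <choice value="prod">Production</choice>
  <choice value="dev">Development</choice>
  <change>
    <condition value="prod">
      <set token="target_index">main_prod</set>
    </condition>
    <condition value="dev">
      <set token="target_index">main_dev</set>
    </condition>
  </change>
</input>

Each <condition> can set or unset any number of additional tokens when the matching choice is selected.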
Hi, I am facing an issue with "no recent logs found" for sourcetype=abc:xyz (example) and index=pqr (example) after 25th November. We are able to see the logs till the 25th of Nov. Please guide me on how to check this; it would be helpful. Thanks, Pooja
Thank you for your response. I'm explicitly using index="oracle" as an example to confirm that the search works fine for an event index. When I use the same search for a metrics index (replacing index="oracle" with index="murex_metrics") it doesn't work, even though we have existing dashboards using this metrics index. Here is the example of a metrics-index search:

| rest splunk_server="local" "/servicesNS/-/-/data/ui/views"
| search "eai:data"="*index=murex_metrics*"

Thanks
Hi @parthiban ,
you only have to set up the conditions for the alert:

<your_search>
| stats count(eval(status="offline")) AS offline_count
        count(eval(status="online")) AS online_count
        earliest(eval(if(status="offline",_time,null()))) AS offline
        earliest(eval(if(status="online",_time,null()))) AS online
| fillnull value=0 offline_count online_count
| eval condition=case(
    offline_count=0 AND online_count>0, "Online",
    offline_count>0 AND online_count=0, "Offline",
    offline_count>0 AND online_count>0 AND online>offline, "Offline but newly online",
    offline_count>0 AND online_count>0 AND online<offline, "Offline",
    offline_count=0 AND online_count=0, "No data")
| search condition="Offline" OR condition="Offline but newly online"
| table condition

In this way your alert will trigger on the two conditions.

Ciao.

Giuseppe
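A sketch of how the trigger itself might look in savedsearches.conf, assuming a hypothetical alert name and schedule:

[device_status_alert]
search = <the search above>
enableSched = 1
cron_schedule = */15 * * * *
counttype = number of events
relation = greater than
quantity = 0

Since the search already filters down to the two "Offline" conditions, triggering on "number of results greater than 0" is enough.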
Hi @abhi04 ,
good for you, see you next time!

Ciao and happy splunking

Giuseppe

P.S.: Karma Points are appreciated by all the contributors
Hello! A team at my organization is concerned about MongoDB 4.2 running on my Splunk hosts and wants me to create a plan to upgrade them to 6.0 at a minimum. From what I've read, it seems like this is either not possible or a bad idea due to possible modifications that have been made by Splunk. Is there a documented way to upgrade to MongoDB 6.0 or newer? Thanks.
You can always remove extra fields when they are no longer needed. In this case it's as simple as

| fields - count

It's relatively easy to transform the output from my search into something resembling "your output" - you just add

| stats list(Domain) as Domain list(domaincount) as "Domain Count" by Workstation User

You can of course reorder the results with the "fields" command at the end to move some columns to the left and others to the right.

There is one catch - you get two separate multivalue fields, Domain and Domain Count. The "problem" with Splunk is that these are two separate fields and there is no connection between them. So if you wanted to, for example, sort one of them and sort the other one accordingly, there's no way to do it unless you zip them together into a list of single values, sort them, and then split them back (see the sketch below). Ugly.

Also, if by any chance your search leading to the "stats list()" produced partially empty results, you wouldn't get multivalue fields with "holes" in them as the result - the empty values are simply skipped, so the two lists can fall out of alignment.
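A sketch of that zip/sort/split dance, assuming the Domain and "Domain Count" multivalue fields produced by the stats above, and a separator ("|") that never appears in the data:

| eval zipped = mvzip('Domain Count', Domain, "|")
| eval zipped = mvsort(zipped)
| eval Domain = mvmap(zipped, mvindex(split(zipped, "|"), 1))
| eval "Domain Count" = mvmap(zipped, mvindex(split(zipped, "|"), 0))
| fields - zipped

Keep in mind that mvsort() sorts lexicographically, so zero-pad the counts first if you need a numeric sort, and that mvmap() requires Splunk 8.0 or later.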
Sample log:

{"timestamp":"2023-12-12T15:27:22.890Z","shortmessage":"(abc): def ghi","level":"info","source":"xyz","file":"/home/abc/def.txt","line":144}
There are two separate things:

One is an indexer cluster - oversimplifying a bit, it's just a bunch of indexers between which the buckets might be replicated (but don't have to be; I've seen clusters with RF=1 - it didn't give you HA but had its pros), managed by a CM (possibly redundant in active-passive mode). A single cluster might be "stretched" across several different sites, but you still need direct communication between the sites because of management traffic between the CM and the indexers in all sites, and replication traffic between the indexers themselves (again - you can probably configure a multisite cluster and contain all buckets within a single site, but it doesn't make much sense).

Another thing is distributed search - you can have several separate indexers or clusters and have a search head (or search head cluster) searching across all your indexers or clusters.

There is also another, even more kinky way of searching - federated search - where the SH searches not directly from indexers but from another SH. But let's leave that aside for now.

So depending on your business needs and technical constraints, you might need one architecture or another. If you have one cluster, the whole cluster has just one CM (possibly with a redundant instance). There's no "splitting a cluster among several CMs". Period. So you either need one big cluster or several smaller ones (but again - separate clusters, not one big cluster with several smaller CMs - there's no such thing). A sketch of the single-CM relationship is below.

Which one will be appropriate in your case? That's something you should discuss with a skilled Splunk Architect - that's what you typically engage either Splunk PS or your friendly local Splunk Partner for.
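To make the single-CM relationship concrete, a minimal server.conf sketch with hypothetical hostnames:

On the cluster manager:

[clustering]
mode = manager
replication_factor = 3
search_factor = 2
pass4SymmKey = <your_key>

On each indexer (peer):

[clustering]
mode = peer
manager_uri = https://cm.example.com:8089
pass4SymmKey = <your_key>

On the search head:

[clustering]
mode = searchhead
manager_uri = https://cm.example.com:8089
pass4SymmKey = <your_key>

A multisite variant layers site settings on top ([general] site = site1 on each node, plus available_sites and the site_* factors on the manager), but there is still exactly one CM.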
Hi, I am trying to ignore the logs that have level "info" and want to send them to the null queue. Example log (not including the pattern before and after, but it's JSON format and this is one of the fields):

"level":"info",

I have tried the below and it does not work. Can someone help confirm whether this is correct, or is there another way? The below is in the heavy forwarder's props:

[abc]
TRANSFORMS-null = infonull

transforms:

[infonull]
SOURCE_KEY = level
REGEX = info
DEST_KEY = queue
FORMAT = nullQueue